Feb 02 10:57:01 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 10:57:01 crc restorecon[4764]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 10:57:01 crc restorecon[4764]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 10:57:01 crc restorecon[4764]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc 
restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc 
restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 
10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc 
restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 10:57:01 crc restorecon[4764]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:01 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc 
restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02
crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 
10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc 
restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc 
restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc 
restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:57:02 crc restorecon[4764]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 
crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc 
restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc 
restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc 
restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc 
restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc 
restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 10:57:02 crc restorecon[4764]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 02 10:57:04 crc kubenswrapper[4925]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 10:57:04 crc kubenswrapper[4925]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 02 10:57:04 crc kubenswrapper[4925]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 10:57:04 crc kubenswrapper[4925]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 02 10:57:04 crc kubenswrapper[4925]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 02 10:57:04 crc kubenswrapper[4925]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.044182 4925 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071177 4925 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071208 4925 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071217 4925 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071225 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071235 4925 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071243 4925 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071251 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071259 4925 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071267 4925 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071274 
4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071282 4925 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071290 4925 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071298 4925 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071309 4925 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071319 4925 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071327 4925 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071335 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071344 4925 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071352 4925 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071359 4925 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071367 4925 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071391 4925 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071399 4925 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071407 4925 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071414 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071422 4925 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071430 4925 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071438 4925 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071446 4925 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071453 4925 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071461 4925 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071468 4925 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071476 4925 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071483 4925 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071493 4925 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071504 4925 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071515 4925 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071525 4925 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071533 4925 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071542 4925 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071551 4925 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071562 4925 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071570 4925 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071578 4925 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071586 4925 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071594 4925 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071602 4925 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071609 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071617 4925 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071624 4925 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071632 4925 feature_gate.go:330] 
unrecognized feature gate: SetEIPForNLBIngressController Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071640 4925 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071647 4925 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071655 4925 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071662 4925 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071669 4925 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071680 4925 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071700 4925 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071709 4925 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071718 4925 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071726 4925 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071734 4925 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071742 4925 feature_gate.go:330] unrecognized feature gate: Example Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071750 4925 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071757 4925 feature_gate.go:330] unrecognized feature 
gate: HardwareSpeed Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071773 4925 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071781 4925 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071789 4925 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071796 4925 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071804 4925 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.071812 4925 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.071967 4925 flags.go:64] FLAG: --address="0.0.0.0" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.071983 4925 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072004 4925 flags.go:64] FLAG: --anonymous-auth="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072015 4925 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072026 4925 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072035 4925 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072050 4925 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072061 4925 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072071 4925 flags.go:64] FLAG: 
--authorization-webhook-cache-unauthorized-ttl="30s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072106 4925 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072116 4925 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072125 4925 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072135 4925 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072144 4925 flags.go:64] FLAG: --cgroup-root="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072153 4925 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072162 4925 flags.go:64] FLAG: --client-ca-file="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072171 4925 flags.go:64] FLAG: --cloud-config="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072180 4925 flags.go:64] FLAG: --cloud-provider="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072188 4925 flags.go:64] FLAG: --cluster-dns="[]" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072205 4925 flags.go:64] FLAG: --cluster-domain="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072214 4925 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072223 4925 flags.go:64] FLAG: --config-dir="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072244 4925 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072254 4925 flags.go:64] FLAG: --container-log-max-files="5" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072265 4925 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072274 4925 flags.go:64] 
FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072283 4925 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072293 4925 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072301 4925 flags.go:64] FLAG: --contention-profiling="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072310 4925 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072319 4925 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072328 4925 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072337 4925 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072348 4925 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072357 4925 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072366 4925 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072375 4925 flags.go:64] FLAG: --enable-load-reader="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072384 4925 flags.go:64] FLAG: --enable-server="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072393 4925 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072420 4925 flags.go:64] FLAG: --event-burst="100" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072430 4925 flags.go:64] FLAG: --event-qps="50" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072439 4925 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 
10:57:04.072448 4925 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072457 4925 flags.go:64] FLAG: --eviction-hard="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072468 4925 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072476 4925 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072485 4925 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072494 4925 flags.go:64] FLAG: --eviction-soft="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072504 4925 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072512 4925 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072521 4925 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072530 4925 flags.go:64] FLAG: --experimental-mounter-path="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072539 4925 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072547 4925 flags.go:64] FLAG: --fail-swap-on="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072557 4925 flags.go:64] FLAG: --feature-gates="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072567 4925 flags.go:64] FLAG: --file-check-frequency="20s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072576 4925 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072586 4925 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072606 4925 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 02 10:57:04 crc 
kubenswrapper[4925]: I0202 10:57:04.072615 4925 flags.go:64] FLAG: --healthz-port="10248" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072624 4925 flags.go:64] FLAG: --help="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072633 4925 flags.go:64] FLAG: --hostname-override="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072642 4925 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072651 4925 flags.go:64] FLAG: --http-check-frequency="20s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072660 4925 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072669 4925 flags.go:64] FLAG: --image-credential-provider-config="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072678 4925 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072686 4925 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072696 4925 flags.go:64] FLAG: --image-service-endpoint="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072705 4925 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072714 4925 flags.go:64] FLAG: --kube-api-burst="100" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072723 4925 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072732 4925 flags.go:64] FLAG: --kube-api-qps="50" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072741 4925 flags.go:64] FLAG: --kube-reserved="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072750 4925 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072758 4925 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 02 10:57:04 crc 
kubenswrapper[4925]: I0202 10:57:04.072767 4925 flags.go:64] FLAG: --kubelet-cgroups="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072776 4925 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072785 4925 flags.go:64] FLAG: --lock-file="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072794 4925 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072804 4925 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072812 4925 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072826 4925 flags.go:64] FLAG: --log-json-split-stream="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072835 4925 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072844 4925 flags.go:64] FLAG: --log-text-split-stream="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072853 4925 flags.go:64] FLAG: --logging-format="text" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072861 4925 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072871 4925 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072879 4925 flags.go:64] FLAG: --manifest-url="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072888 4925 flags.go:64] FLAG: --manifest-url-header="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072899 4925 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072908 4925 flags.go:64] FLAG: --max-open-files="1000000" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072918 4925 flags.go:64] FLAG: --max-pods="110" Feb 02 10:57:04 crc 
kubenswrapper[4925]: I0202 10:57:04.072927 4925 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072947 4925 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072957 4925 flags.go:64] FLAG: --memory-manager-policy="None" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072965 4925 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072975 4925 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072983 4925 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.072992 4925 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073011 4925 flags.go:64] FLAG: --node-status-max-images="50" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073020 4925 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073029 4925 flags.go:64] FLAG: --oom-score-adj="-999" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073038 4925 flags.go:64] FLAG: --pod-cidr="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073047 4925 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073066 4925 flags.go:64] FLAG: --pod-manifest-path="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073099 4925 flags.go:64] FLAG: --pod-max-pids="-1" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073108 4925 flags.go:64] FLAG: --pods-per-core="0" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073117 4925 
flags.go:64] FLAG: --port="10250" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073126 4925 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073135 4925 flags.go:64] FLAG: --provider-id="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073144 4925 flags.go:64] FLAG: --qos-reserved="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073153 4925 flags.go:64] FLAG: --read-only-port="10255" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073162 4925 flags.go:64] FLAG: --register-node="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073170 4925 flags.go:64] FLAG: --register-schedulable="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073187 4925 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073202 4925 flags.go:64] FLAG: --registry-burst="10" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073210 4925 flags.go:64] FLAG: --registry-qps="5" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073219 4925 flags.go:64] FLAG: --reserved-cpus="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073227 4925 flags.go:64] FLAG: --reserved-memory="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073238 4925 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073247 4925 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073256 4925 flags.go:64] FLAG: --rotate-certificates="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073265 4925 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073273 4925 flags.go:64] FLAG: --runonce="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073282 4925 flags.go:64] FLAG: 
--runtime-cgroups="/system.slice/crio.service" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073291 4925 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073301 4925 flags.go:64] FLAG: --seccomp-default="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073309 4925 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073318 4925 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073348 4925 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073357 4925 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073367 4925 flags.go:64] FLAG: --storage-driver-password="root" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073376 4925 flags.go:64] FLAG: --storage-driver-secure="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073384 4925 flags.go:64] FLAG: --storage-driver-table="stats" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073393 4925 flags.go:64] FLAG: --storage-driver-user="root" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073403 4925 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073412 4925 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073421 4925 flags.go:64] FLAG: --system-cgroups="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073429 4925 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073443 4925 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073451 4925 flags.go:64] FLAG: --tls-cert-file="" Feb 02 10:57:04 crc 
kubenswrapper[4925]: I0202 10:57:04.073460 4925 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073482 4925 flags.go:64] FLAG: --tls-min-version="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073491 4925 flags.go:64] FLAG: --tls-private-key-file="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073499 4925 flags.go:64] FLAG: --topology-manager-policy="none" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073508 4925 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073520 4925 flags.go:64] FLAG: --topology-manager-scope="container" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073529 4925 flags.go:64] FLAG: --v="2" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073540 4925 flags.go:64] FLAG: --version="false" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073551 4925 flags.go:64] FLAG: --vmodule="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073561 4925 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.073571 4925 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073862 4925 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073874 4925 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073883 4925 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073892 4925 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073901 4925 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073908 4925 
feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073917 4925 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073925 4925 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073933 4925 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073943 4925 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073953 4925 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073963 4925 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073973 4925 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.073992 4925 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074001 4925 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074009 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074016 4925 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074024 4925 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074031 4925 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074039 4925 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074047 4925 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074054 4925 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074062 4925 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074069 4925 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074101 4925 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074110 4925 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074121 4925 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074129 4925 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074136 4925 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074144 4925 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074151 4925 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074160 4925 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074167 4925 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074174 4925 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 10:57:04 crc 
kubenswrapper[4925]: W0202 10:57:04.074182 4925 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074190 4925 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074198 4925 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074205 4925 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074213 4925 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074221 4925 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074228 4925 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074236 4925 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074244 4925 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074256 4925 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074265 4925 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074275 4925 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074285 4925 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074294 4925 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074303 4925 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074323 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074331 4925 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074340 4925 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074348 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074355 4925 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074365 4925 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074374 4925 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074382 4925 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074391 4925 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074403 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074410 4925 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074418 4925 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074426 4925 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074434 4925 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074441 4925 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074449 4925 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074456 4925 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074464 4925 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074472 4925 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074480 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074498 4925 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.074506 4925 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.074519 4925 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.102612 4925 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.102655 4925 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102770 4925 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102781 4925 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102790 4925 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102798 4925 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102807 4925 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102814 4925 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102822 4925 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102830 4925 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102838 4925 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102846 4925 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102854 4925 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102862 4925 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102880 4925 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102888 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102896 4925 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102904 4925 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102911 4925 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102919 4925 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102926 4925 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102934 4925 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102942 4925 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102949 4925 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102957 4925 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102964 4925 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102972 4925 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102980 4925 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102988 4925 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.102996 4925 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103004 4925 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103012 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103019 4925 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103027 4925 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103034 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103043 4925 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103050 4925 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103058 4925 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103065 4925 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103109 4925 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103120 4925 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103129 4925 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103138 4925 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103145 4925 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103153 4925 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103162 4925 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103173 4925 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103187 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103196 4925 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103206 4925 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103216 4925 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103225 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103235 4925 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103243 4925 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103252 4925 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103262 4925 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103272 4925 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103282 4925 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103293 4925 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103302 4925 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103310 4925 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103318 4925 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103326 4925 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103334 4925 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103342 4925 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103349 4925 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103357 4925 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103365 4925 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103373 4925 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103380 4925 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103388 4925 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103396 4925 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103404 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.103417 4925 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103628 4925 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103643 4925 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103653 4925 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103664 4925 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103674 4925 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103683 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103691 4925 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103699 4925 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103706 4925 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103714 4925 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103722 4925 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103732 4925 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103742 4925 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103750 4925 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103761 4925 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103769 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103777 4925 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103785 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103793 4925 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103802 4925 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103810 4925 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103818 4925 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103826 4925 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103834 4925 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103841 4925 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103849 4925 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103857 4925 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103865 4925 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103872 4925 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103879 4925 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103888 4925 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103895 4925 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103903 4925 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103910 4925 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103918 4925 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103926 4925 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103934 4925 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103944 4925 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103952 4925 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103960 4925 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103968 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103977 4925 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103984 4925 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103992 4925 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.103999 4925 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104009 4925 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104019 4925 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104027 4925 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104036 4925 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104046 4925 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104056 4925 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104065 4925 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104111 4925 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104122 4925 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104131 4925 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104139 4925 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104147 4925 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104155 4925 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104163 4925 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104171 4925 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104179 4925 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104187 4925 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104194 4925 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104202 4925 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104209 4925 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104217 4925 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104224 4925 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104232 4925 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104240 4925 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104248 4925 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.104256 4925 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.104269 4925 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.105524 4925 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.136510 4925 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.136624 4925 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.139350 4925 server.go:997] "Starting client certificate rotation"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.139383 4925 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.146093 4925 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-25 05:43:49.908059354 +0000 UTC
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.146161 4925 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.243191 4925 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.265207 4925 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.299655 4925 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.342012 4925 log.go:25] "Validated CRI v1 runtime API"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.492838 4925 log.go:25] "Validated CRI v1 image API"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.495304 4925 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.536846 4925 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-10-52-18-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.536879 4925 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.568949 4925 manager.go:217] Machine: {Timestamp:2026-02-02 10:57:04.562998997 +0000 UTC m=+1.567248059 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c5eed54a-6e55-454f-8465-b3753cd45b28 BootID:d1a35f2f-5b56-42fa-a9f8-72c174fa2172 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0a:15:30 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0a:15:30 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:98:3d:09 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:29:7b:9a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d2:d3:2a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:23:1e:b5 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:a6:0f:76 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:c3:d3:86:f2:4b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:f5:49:ad:ac:bc Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.570470 4925 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.570810 4925 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.571381 4925 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.571702 4925 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.571764 4925 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.572060 4925 topology_manager.go:138] "Creating topology manager with none policy"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.572113 4925 container_manager_linux.go:303] "Creating device plugin manager"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.572731 4925 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.573596 4925 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.574427 4925 state_mem.go:36] "Initialized new in-memory state store"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.574919 4925 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.579324 4925 kubelet.go:418] "Attempting to sync node with API server"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.579358 4925 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.579383 4925 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.579404 4925 kubelet.go:324] "Adding apiserver pod source"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.579420 4925 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.586650 4925 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.587925 4925 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.588172 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.588359 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.588725 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.588817 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Feb 02 10:57:04 crc kubenswrapper[4925]: I0202
10:57:04.590421 4925 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594314 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594370 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594391 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594410 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594436 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594453 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594468 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594494 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594512 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594528 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594575 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.594596 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.595661 4925 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 02 10:57:04 crc 
kubenswrapper[4925]: I0202 10:57:04.596448 4925 server.go:1280] "Started kubelet" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.596740 4925 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.596695 4925 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.597212 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.597463 4925 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 02 10:57:04 crc systemd[1]: Started Kubernetes Kubelet. Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.605856 4925 server.go:460] "Adding debug handlers to kubelet server" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.609341 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.609411 4925 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.609599 4925 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.609646 4925 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.609531 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:36:19.960877633 +0000 UTC Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.609699 4925 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.609933 4925 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.610510 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.610570 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.610869 4925 factory.go:55] Registering systemd factory Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.610990 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="200ms" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.611558 4925 factory.go:221] Registration of the systemd container factory successfully Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.611818 4925 factory.go:153] Registering CRI-O factory Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.611834 4925 factory.go:221] Registration of the crio container factory successfully Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.611899 4925 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: 
connect: no such file or directory Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.611929 4925 factory.go:103] Registering Raw factory Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.611953 4925 manager.go:1196] Started watching for new ooms in manager Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.611324 4925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189068bf58ea7219 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:57:04.596406809 +0000 UTC m=+1.600655801,LastTimestamp:2026-02-02 10:57:04.596406809 +0000 UTC m=+1.600655801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.614488 4925 manager.go:319] Starting recovery of all containers Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632616 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632669 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632680 4925 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632691 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632701 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632709 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632719 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632729 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632742 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632755 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632767 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632778 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632786 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632796 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632805 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632814 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632824 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.632832 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.634408 4925 manager.go:324] Recovery completed Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637033 4925 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637103 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 
10:57:04.637123 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637144 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637159 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637174 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637189 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637205 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637219 4925 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637240 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637255 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637270 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637283 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637301 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637318 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637331 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637386 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637402 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637418 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637433 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637446 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637460 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637474 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637488 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637503 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637517 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637531 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637544 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637558 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637572 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637587 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637600 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637612 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637625 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637639 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637657 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637671 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637685 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637698 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637711 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637725 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637741 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637754 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637766 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637776 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637818 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637836 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637848 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637859 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637873 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637886 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637897 4925 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637909 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637928 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637941 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637953 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637966 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637980 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.637993 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638004 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638017 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638027 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638041 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638055 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638069 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638118 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638134 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638145 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638157 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638170 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638183 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638196 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638208 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638219 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638231 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638243 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638254 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638266 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638281 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638294 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638305 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638318 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638329 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638340 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638353 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638365 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638377 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638398 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" 
seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638411 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638424 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638472 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638484 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638494 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638507 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638518 4925 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638530 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638543 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638555 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638568 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638578 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638587 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638595 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638603 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638612 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638621 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638630 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638640 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638648 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638658 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638666 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638677 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638686 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638695 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638704 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638713 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638722 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638731 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638741 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638749 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" 
seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638759 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638767 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638776 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638784 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638793 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638802 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 10:57:04 crc 
kubenswrapper[4925]: I0202 10:57:04.638810 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638819 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638827 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638836 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638846 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638857 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638867 4925 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638877 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638888 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638896 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638903 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638913 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638920 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638928 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638936 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638945 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638955 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638967 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638978 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.638990 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639000 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639010 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639021 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639030 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639043 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639053 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639065 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639094 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639106 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639117 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639127 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639138 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639149 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639162 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639171 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639180 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639193 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639203 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639215 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639227 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639238 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639249 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639258 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" 
seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639266 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639275 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639286 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639297 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639309 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639321 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639334 
4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639346 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639356 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639367 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639378 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639390 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639403 4925 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639415 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639426 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639437 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639449 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639461 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639472 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639483 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639494 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639506 4925 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639518 4925 reconstruct.go:97] "Volume reconstruction finished" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.639526 4925 reconciler.go:26] "Reconciler: start to sync state" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.646576 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.648286 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.648329 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.648341 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.649111 4925 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.649125 4925 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.649156 4925 state_mem.go:36] "Initialized new in-memory state store" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.660062 4925 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.662967 4925 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.663019 4925 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 10:57:04 crc kubenswrapper[4925]: I0202 10:57:04.663048 4925 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.663125 4925 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 10:57:04 crc kubenswrapper[4925]: W0202 10:57:04.663955 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.664034 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.710553 4925 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.764250 4925 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.810656 4925 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.812633 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="400ms" Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.911594 4925 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 10:57:04 crc kubenswrapper[4925]: E0202 10:57:04.964396 4925 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.011906 4925 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.073466 4925 policy_none.go:49] "None policy: Start" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.075062 4925 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.075110 4925 state_mem.go:35] "Initializing new in-memory state store" Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.112555 4925 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.156925 4925 manager.go:334] "Starting Device Plugin manager" Feb 02 10:57:05 crc kubenswrapper[4925]: 
I0202 10:57:05.157029 4925 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.157051 4925 server.go:79] "Starting device plugin registration server" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.157827 4925 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.157861 4925 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.158207 4925 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.158384 4925 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.158399 4925 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.169220 4925 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.213881 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="800ms" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.258750 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.260884 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.260970 4925 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.260998 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.261123 4925 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.261859 4925 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.365338 4925 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.365586 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.367856 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.367930 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.367955 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.368228 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.368602 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.368680 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.369837 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.369887 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.369902 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.370098 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.370253 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.370331 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.371347 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.371395 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.371419 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.371460 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.371491 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.371505 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.371645 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.371709 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.371747 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.372064 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.372153 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.372185 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.372967 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.373063 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.373110 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.373105 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.373218 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.373234 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.373374 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.373569 
4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.373658 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.374620 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.374668 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.374690 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.375028 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.375031 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.375146 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.375066 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.375205 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.376371 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.376411 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.376426 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452441 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452490 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 
10:57:05.452514 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452537 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452634 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452696 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452750 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452811 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452858 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452896 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452930 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.452999 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.453028 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.453104 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.453152 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.462767 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.464195 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.464297 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.464318 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.464357 4925 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.465228 4925 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Feb 02 10:57:05 
crc kubenswrapper[4925]: W0202 10:57:05.484101 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.484219 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:05 crc kubenswrapper[4925]: W0202 10:57:05.537864 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.537953 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554526 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554578 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554602 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554623 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554642 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554661 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554679 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554697 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554722 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554746 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554765 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554785 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554791 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554850 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554832 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554754 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554930 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554872 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 
10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554877 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554872 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554901 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554968 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554917 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554899 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554912 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.554804 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.555204 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.555237 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.555277 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.555390 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.598956 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.610042 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:04:03.930597431 +0000 UTC Feb 02 10:57:05 crc kubenswrapper[4925]: W0202 10:57:05.644734 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.644821 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.713147 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: W0202 10:57:05.719934 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.720044 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.745875 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.755163 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.761002 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.774318 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.865872 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.867256 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.867316 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.867329 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:05 crc kubenswrapper[4925]: I0202 10:57:05.867357 4925 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:57:05 crc kubenswrapper[4925]: E0202 10:57:05.867913 4925 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Feb 02 10:57:06 crc kubenswrapper[4925]: E0202 10:57:06.015619 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="1.6s" Feb 02 10:57:06 crc kubenswrapper[4925]: I0202 10:57:06.384817 4925 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:57:06 crc kubenswrapper[4925]: E0202 10:57:06.386220 4925 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:06 crc kubenswrapper[4925]: I0202 10:57:06.598313 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:06 crc kubenswrapper[4925]: I0202 10:57:06.610251 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:26:50.492738913 +0000 UTC Feb 02 10:57:06 crc kubenswrapper[4925]: I0202 10:57:06.668763 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:06 crc kubenswrapper[4925]: I0202 10:57:06.670629 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:06 crc kubenswrapper[4925]: I0202 10:57:06.670698 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:06 crc kubenswrapper[4925]: I0202 10:57:06.670721 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:06 crc kubenswrapper[4925]: I0202 10:57:06.670763 4925 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:57:06 crc kubenswrapper[4925]: E0202 10:57:06.671386 4925 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Feb 02 10:57:07 crc kubenswrapper[4925]: I0202 10:57:07.598439 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:07 crc kubenswrapper[4925]: I0202 10:57:07.610764 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:55:03.74480061 +0000 UTC Feb 02 10:57:07 crc kubenswrapper[4925]: W0202 10:57:07.612734 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:07 crc kubenswrapper[4925]: E0202 10:57:07.612840 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:07 crc kubenswrapper[4925]: E0202 10:57:07.616773 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="3.2s" Feb 02 10:57:07 crc kubenswrapper[4925]: I0202 10:57:07.672292 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"977cc0e06b0d7616bdac5bbe2c1f255f3d0509d3746c2722140ac4e5a0107346"} Feb 02 10:57:07 crc kubenswrapper[4925]: I0202 10:57:07.673390 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a09392c9ba61914dba6928f4fae0c03ed884d0e17844eb8a401e8f3ff4c2524e"} Feb 02 10:57:07 crc kubenswrapper[4925]: I0202 10:57:07.674597 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb0233530f9d36f7ceff8c5189ea156b859762e453692bd83f7cfdc96321aca6"} Feb 02 10:57:07 crc kubenswrapper[4925]: I0202 10:57:07.675463 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"06b625484085cf4828c1f10911a0da16bfac0ef859c119e46c84a192f75cc523"} Feb 02 10:57:07 crc kubenswrapper[4925]: I0202 10:57:07.676494 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7bea25813cf7f31eb765a677de7477e2c757373b41ea3971b43b1e4bd1060eb7"} Feb 02 10:57:08 crc kubenswrapper[4925]: W0202 10:57:08.158218 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:08 crc kubenswrapper[4925]: E0202 10:57:08.158575 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:08 crc kubenswrapper[4925]: I0202 10:57:08.272481 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 
10:57:08 crc kubenswrapper[4925]: I0202 10:57:08.274518 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:08 crc kubenswrapper[4925]: I0202 10:57:08.274547 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:08 crc kubenswrapper[4925]: I0202 10:57:08.274556 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:08 crc kubenswrapper[4925]: I0202 10:57:08.274575 4925 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:57:08 crc kubenswrapper[4925]: E0202 10:57:08.275055 4925 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Feb 02 10:57:08 crc kubenswrapper[4925]: W0202 10:57:08.312658 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:08 crc kubenswrapper[4925]: E0202 10:57:08.312736 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:08 crc kubenswrapper[4925]: I0202 10:57:08.599129 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:08 crc kubenswrapper[4925]: I0202 
10:57:08.611499 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:57:39.284353541 +0000 UTC Feb 02 10:57:08 crc kubenswrapper[4925]: W0202 10:57:08.819172 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:08 crc kubenswrapper[4925]: E0202 10:57:08.819295 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:09 crc kubenswrapper[4925]: I0202 10:57:09.598253 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:09 crc kubenswrapper[4925]: I0202 10:57:09.612449 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:28:48.741482974 +0000 UTC Feb 02 10:57:10 crc kubenswrapper[4925]: I0202 10:57:10.449009 4925 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:57:10 crc kubenswrapper[4925]: E0202 10:57:10.449999 4925 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:10 crc kubenswrapper[4925]: I0202 10:57:10.598246 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:10 crc kubenswrapper[4925]: I0202 10:57:10.612640 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:49:55.56708111 +0000 UTC Feb 02 10:57:10 crc kubenswrapper[4925]: I0202 10:57:10.687371 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7"} Feb 02 10:57:10 crc kubenswrapper[4925]: E0202 10:57:10.725948 4925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189068bf58ea7219 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:57:04.596406809 +0000 UTC m=+1.600655801,LastTimestamp:2026-02-02 10:57:04.596406809 +0000 UTC m=+1.600655801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:57:10 crc kubenswrapper[4925]: E0202 10:57:10.818131 4925 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="6.4s" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.476175 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.478690 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.478761 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.478777 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.478816 4925 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:57:11 crc kubenswrapper[4925]: E0202 10:57:11.479672 4925 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.599070 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.613420 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:42:01.902985954 +0000 UTC Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.697134 4925 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18"} Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.699173 4925 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c" exitCode=0 Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.699344 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.699921 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c"} Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.700535 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.700588 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.700614 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.702244 4925 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24" exitCode=0 Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.702386 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24"} Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.702446 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.704226 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.704274 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.704297 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.706134 4925 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945" exitCode=0 Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.706239 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945"} Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.706278 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.707476 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.707517 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.707532 4925 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.709063 4925 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be" exitCode=0 Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.709154 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be"} Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.710108 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.711751 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.711791 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:11 crc kubenswrapper[4925]: I0202 10:57:11.711809 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:12 crc kubenswrapper[4925]: W0202 10:57:12.338836 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:12 crc kubenswrapper[4925]: E0202 10:57:12.339294 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: 
connect: connection refused" logger="UnhandledError" Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.599590 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.614342 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:52:22.00645006 +0000 UTC Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.716095 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924"} Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.719545 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6"} Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.722002 4925 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8" exitCode=0 Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.722057 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8"} Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.724181 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:12 crc 
kubenswrapper[4925]: I0202 10:57:12.724630 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8"} Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.725026 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.725049 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:12 crc kubenswrapper[4925]: I0202 10:57:12.725058 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:12 crc kubenswrapper[4925]: W0202 10:57:12.792408 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:12 crc kubenswrapper[4925]: E0202 10:57:12.792530 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:12 crc kubenswrapper[4925]: W0202 10:57:12.970217 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:12 crc kubenswrapper[4925]: E0202 10:57:12.970406 4925 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.598137 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.614811 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:12:53.812934514 +0000 UTC Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.727649 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8ce4cd6d19d406e202c1d4b56b6368afe79f5308cb92de982830d65a94cf66aa"} Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.727745 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.728511 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.728532 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.728540 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.730747 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb"} Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.730820 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.731407 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.731472 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.731480 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.733946 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b"} Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.733969 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.733984 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf"} Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.734624 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.734650 4925 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.734659 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.737100 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b33cecca7fdbdb4854caa42f471d7dc6756427abe94ab2e71a8b8b0c59973c79"} Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.737125 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.737131 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e"} Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.737147 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b"} Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.737158 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921"} Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.737108 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.737818 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.737849 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.737857 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.738099 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.738115 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:13 crc kubenswrapper[4925]: I0202 10:57:13.738147 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:13 crc kubenswrapper[4925]: W0202 10:57:13.786569 4925 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Feb 02 10:57:13 crc kubenswrapper[4925]: E0202 10:57:13.786658 4925 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.615846 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:28:11.958923326 +0000 UTC Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.741656 4925 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d" exitCode=0 Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.741783 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d"} Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.741864 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.741891 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.741868 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.741864 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.741986 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.742666 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.742961 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743012 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743031 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743255 4925 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743295 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743312 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743323 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743349 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743359 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743415 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743466 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.743483 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:14 crc kubenswrapper[4925]: I0202 10:57:14.779245 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:15 crc kubenswrapper[4925]: E0202 10:57:15.170067 4925 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.616557 4925 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:40:57.045047488 +0000 UTC Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.748198 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.748208 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.748155 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319"} Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.748358 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa"} Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.748983 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.749279 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.749313 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.749326 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.749360 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.749388 4925 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.749402 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.750502 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.750547 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.750563 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.807845 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.818666 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:15 crc kubenswrapper[4925]: I0202 10:57:15.907790 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.616665 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:31:31.751271775 +0000 UTC Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.758559 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57"} Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 
10:57:16.758617 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.758650 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d"} Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.758686 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23"} Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.758661 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.759979 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.760044 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.760068 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.760321 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.760376 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.760411 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.888738 4925 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.888931 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.889918 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.889944 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:16 crc kubenswrapper[4925]: I0202 10:57:16.889953 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.199984 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.567062 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.617120 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:01:37.710638869 +0000 UTC Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.761596 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.761685 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.761765 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.762632 4925 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.762675 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.762695 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.763202 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.763224 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.763234 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.763243 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.763291 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.763313 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.880413 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.882434 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.882466 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.882476 4925 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:17 crc kubenswrapper[4925]: I0202 10:57:17.882505 4925 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.374961 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.617846 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:04:55.630964233 +0000 UTC Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.761189 4925 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.764473 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.764596 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.765466 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.765500 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.765511 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.765676 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.765704 4925 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.765718 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.908664 4925 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:57:18 crc kubenswrapper[4925]: I0202 10:57:18.908787 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:57:19 crc kubenswrapper[4925]: I0202 10:57:19.618771 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 21:32:52.42451885 +0000 UTC Feb 02 10:57:20 crc kubenswrapper[4925]: I0202 10:57:20.619596 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:38:10.544431021 +0000 UTC Feb 02 10:57:21 crc kubenswrapper[4925]: I0202 10:57:21.620717 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:08:30.342136205 +0000 UTC Feb 02 10:57:22 crc kubenswrapper[4925]: I0202 10:57:22.621411 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2025-12-11 10:26:26.087602628 +0000 UTC Feb 02 10:57:23 crc kubenswrapper[4925]: I0202 10:57:23.240710 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 02 10:57:23 crc kubenswrapper[4925]: I0202 10:57:23.241441 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:23 crc kubenswrapper[4925]: I0202 10:57:23.243279 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:23 crc kubenswrapper[4925]: I0202 10:57:23.243544 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:23 crc kubenswrapper[4925]: I0202 10:57:23.243756 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:23 crc kubenswrapper[4925]: I0202 10:57:23.622154 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:40:06.118911055 +0000 UTC Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.599311 4925 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.622306 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:29:50.92914672 +0000 UTC Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.784033 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.784204 4925 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.784993 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.785028 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.785038 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.789586 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.792064 4925 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b33cecca7fdbdb4854caa42f471d7dc6756427abe94ab2e71a8b8b0c59973c79" exitCode=255 Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.792120 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b33cecca7fdbdb4854caa42f471d7dc6756427abe94ab2e71a8b8b0c59973c79"} Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.792294 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.792970 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.792996 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.793005 4925 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:24 crc kubenswrapper[4925]: I0202 10:57:24.793452 4925 scope.go:117] "RemoveContainer" containerID="b33cecca7fdbdb4854caa42f471d7dc6756427abe94ab2e71a8b8b0c59973c79" Feb 02 10:57:25 crc kubenswrapper[4925]: E0202 10:57:25.170297 4925 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.623460 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:12:28.551968011 +0000 UTC Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.769004 4925 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.769062 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.773575 4925 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.773615 4925 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.795449 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.797023 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324"} Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.797187 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.798145 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.798171 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:25 crc kubenswrapper[4925]: I0202 10:57:25.798179 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:26 crc kubenswrapper[4925]: I0202 10:57:26.410424 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:26 crc kubenswrapper[4925]: I0202 10:57:26.624145 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:39:11.456978636 +0000 UTC Feb 02 10:57:26 crc kubenswrapper[4925]: I0202 10:57:26.800343 4925 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:26 crc kubenswrapper[4925]: I0202 10:57:26.801477 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:26 crc kubenswrapper[4925]: I0202 10:57:26.801523 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:26 crc kubenswrapper[4925]: I0202 10:57:26.801536 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:27 crc kubenswrapper[4925]: I0202 10:57:27.578776 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:27 crc kubenswrapper[4925]: I0202 10:57:27.625205 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:02:34.959716563 +0000 UTC Feb 02 10:57:27 crc kubenswrapper[4925]: I0202 10:57:27.804434 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:27 crc kubenswrapper[4925]: I0202 10:57:27.806381 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:27 crc kubenswrapper[4925]: I0202 10:57:27.806452 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:27 crc kubenswrapper[4925]: I0202 10:57:27.806473 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:27 crc kubenswrapper[4925]: I0202 10:57:27.813239 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:28 crc kubenswrapper[4925]: I0202 10:57:28.626303 4925 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:48:41.866829507 +0000 UTC Feb 02 10:57:28 crc kubenswrapper[4925]: I0202 10:57:28.806690 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:28 crc kubenswrapper[4925]: I0202 10:57:28.807380 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:28 crc kubenswrapper[4925]: I0202 10:57:28.807483 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:28 crc kubenswrapper[4925]: I0202 10:57:28.807553 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:28 crc kubenswrapper[4925]: I0202 10:57:28.908803 4925 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:57:28 crc kubenswrapper[4925]: I0202 10:57:28.908958 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:57:29 crc kubenswrapper[4925]: I0202 10:57:29.627581 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:11:54.269350999 +0000 UTC Feb 02 10:57:30 crc 
kubenswrapper[4925]: I0202 10:57:30.628336 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:16:24.055464288 +0000 UTC Feb 02 10:57:30 crc kubenswrapper[4925]: E0202 10:57:30.772187 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Feb 02 10:57:30 crc kubenswrapper[4925]: I0202 10:57:30.773851 4925 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 10:57:30 crc kubenswrapper[4925]: I0202 10:57:30.774208 4925 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 10:57:30 crc kubenswrapper[4925]: I0202 10:57:30.775140 4925 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 02 10:57:30 crc kubenswrapper[4925]: I0202 10:57:30.775218 4925 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 10:57:30 crc kubenswrapper[4925]: E0202 10:57:30.776435 4925 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 02 10:57:30 crc kubenswrapper[4925]: I0202 10:57:30.777351 4925 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 10:57:30 crc kubenswrapper[4925]: I0202 10:57:30.778772 4925 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 10:57:30 crc kubenswrapper[4925]: I0202 10:57:30.801576 4925 csr.go:261] certificate signing request csr-dxqbm is approved, waiting to be issued Feb 02 10:57:30 crc kubenswrapper[4925]: I0202 10:57:30.811005 4925 
csr.go:257] certificate signing request csr-dxqbm is issued Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.597357 4925 apiserver.go:52] "Watching apiserver" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.618590 4925 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.618800 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.619089 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.619137 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.619332 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.619439 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.619437 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.619523 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.619340 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.619613 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.620100 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.622003 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.622247 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.622438 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.622503 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.622619 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.622762 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.622888 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.623012 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.623135 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.628428 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-05 20:19:27.712998284 +0000 UTC Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.650034 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.678881 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.705009 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.710748 4925 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.728620 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.753472 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.763928 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.772720 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780117 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780155 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780172 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780190 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780207 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780225 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780241 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780258 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780274 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780288 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780303 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780324 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780340 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:57:31 crc 
kubenswrapper[4925]: I0202 10:57:31.780354 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780368 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780389 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780403 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780418 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780435 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780448 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780461 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780476 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780534 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780552 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780567 4925 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780582 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780597 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780610 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780625 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780641 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780637 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780657 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780753 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780782 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780814 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780818 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780837 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780862 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780885 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780890 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780917 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780939 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780963 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780988 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.780990 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781014 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781029 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781039 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781065 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781111 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781134 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781147 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781170 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781180 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781197 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781223 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781245 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:57:31 crc 
kubenswrapper[4925]: I0202 10:57:31.781268 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781291 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781313 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781333 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781354 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781370 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" 
(OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781378 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781399 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781421 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781441 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781465 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: 
\"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781485 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781506 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781527 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781548 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781569 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781589 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781610 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781631 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781652 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781680 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781703 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781725 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781747 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781770 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781792 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781816 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781837 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781859 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781882 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781904 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781925 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781946 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:57:31 crc kubenswrapper[4925]: 
I0202 10:57:31.781972 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781994 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782016 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782040 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782063 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782107 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782130 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782153 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782176 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782199 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782220 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782242 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782263 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782284 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782305 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782331 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782354 4925 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782377 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782399 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782422 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782443 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782466 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782488 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782513 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782536 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782556 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782586 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782609 4925 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782629 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782654 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782675 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782698 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782728 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782762 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782785 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782806 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782827 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782849 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782877 4925 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782902 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782924 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782950 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782974 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782995 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" 
(UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783016 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783037 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783060 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783101 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783126 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783149 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783172 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783195 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783217 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783240 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783263 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:57:31 crc 
kubenswrapper[4925]: I0202 10:57:31.783288 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783311 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783359 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783384 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783408 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783435 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783463 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783486 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783665 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783694 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783721 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:57:31 crc kubenswrapper[4925]: 
I0202 10:57:31.783748 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783772 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783797 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783822 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783848 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783873 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783898 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783917 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783939 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783967 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783988 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784007 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784025 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784044 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784062 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.785641 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.785681 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.785705 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781507 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.785731 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781534 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781745 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.786957 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.787273 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788487 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788540 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788570 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788601 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788630 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788659 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788685 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788715 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788747 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 
10:57:31.788773 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788801 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788831 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788856 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788932 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788964 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788997 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789032 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789139 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789179 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789204 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789232 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789262 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789292 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789317 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789346 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789373 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789439 
4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789928 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789963 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789996 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790022 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790046 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790070 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790110 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790134 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790158 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790181 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790200 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790220 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790242 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790307 4925 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790320 4925 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790331 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790343 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790357 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790367 4925 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790378 4925 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790392 4925 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790407 4925 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790419 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790301 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781758 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781867 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781883 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.781980 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782032 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782167 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782297 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782483 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782620 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782623 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782746 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782778 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.782910 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783029 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783063 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783164 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783214 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783282 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783358 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783440 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783557 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783675 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783676 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.791308 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783861 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783927 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.783979 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784046 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784175 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784352 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784504 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784693 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.784867 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.785166 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.785239 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.791542 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.785443 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.785474 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.786877 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.787011 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.787332 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788199 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788808 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.788975 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789519 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789674 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789891 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789906 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.789908 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790109 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790545 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790543 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790553 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.790707 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.791818 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.791975 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792156 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792260 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792317 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792444 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792497 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792582 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792673 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792778 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792840 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.792987 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.793212 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.793376 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.794494 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.794603 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.794814 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.795065 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.795206 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.795215 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.795303 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.795485 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.795500 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.795602 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.796272 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.796338 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.796381 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.796528 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.798249 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.798397 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.798812 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.798830 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.798899 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.799054 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.799188 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.799307 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.799383 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.799652 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.799696 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.800165 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.800307 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.800562 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.801035 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.801233 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.801244 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.801474 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.801512 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.801670 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.801696 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.801859 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.802250 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.802294 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.802313 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.802628 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.802894 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.802812 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.803022 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.803123 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.803129 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.803295 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.803783 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.803499 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.804148 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.804569 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.804942 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.805026 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.805469 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.805543 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.805743 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.805881 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:57:32.305854868 +0000 UTC m=+29.310103840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.806425 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.808936 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.809031 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.809101 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.809122 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.809135 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.809150 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.809163 4925 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.809223 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:32.309206688 +0000 UTC m=+29.313455650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.809240 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.812180 4925 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.817215 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.817243 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.817277 4925 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.817344 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:32.317320565 +0000 UTC m=+29.321569737 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.817616 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.817515 4925 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 10:52:30 +0000 UTC, rotation deadline is 2026-11-15 01:08:59.503608521 +0000 UTC Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.817671 4925 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.817886 4925 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6854h11m27.685932066s for next certificate rotation Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.818172 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:32.318150427 +0000 UTC m=+29.322399589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.818233 4925 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.818273 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:32.3182632 +0000 UTC m=+29.322512372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.829253 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.829489 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.830291 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.832650 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.834657 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.834712 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.834698 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.835105 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.835274 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.836363 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.836894 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.837109 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.837678 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.838021 4925 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324" exitCode=255 Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.838058 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324"} Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.838123 4925 scope.go:117] "RemoveContainer" containerID="b33cecca7fdbdb4854caa42f471d7dc6756427abe94ab2e71a8b8b0c59973c79" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.841159 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.848423 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.848537 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.848630 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.849259 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.849313 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.849429 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.849500 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.850213 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.855185 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.855503 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.857905 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.858518 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.859132 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.861983 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.862831 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.866221 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.867646 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.868108 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.868359 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.868606 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.871099 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.871310 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.871450 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.871592 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.871593 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.871932 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.871942 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.872188 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.872523 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.872806 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.872844 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.878761 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.878889 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.879584 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.880564 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.880882 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.885452 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.889088 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.889723 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.890155 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891776 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891808 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891862 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891873 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891882 4925 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891891 4925 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891900 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891908 4925 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891916 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891925 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891934 4925 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891943 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" 
Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891952 4925 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891961 4925 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891970 4925 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891979 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891987 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.891995 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892004 4925 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892012 4925 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892021 4925 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892029 4925 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892037 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892046 4925 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892055 4925 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892064 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892072 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 
10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892096 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892104 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892114 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892123 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892134 4925 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892146 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892157 4925 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892168 4925 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892178 4925 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892189 4925 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892200 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892210 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892220 4925 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892229 4925 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892238 4925 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 
10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892247 4925 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892256 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892266 4925 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892276 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892285 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892293 4925 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892301 4925 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892310 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892318 4925 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892327 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892337 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892347 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892356 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892374 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892383 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 
10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892392 4925 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892400 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892410 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892418 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892426 4925 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892435 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892474 4925 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892484 4925 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892494 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892503 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892513 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892524 4925 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892534 4925 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892545 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892563 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892575 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892585 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892595 4925 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892605 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892615 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892624 4925 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892634 4925 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892643 4925 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892653 4925 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892663 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892673 4925 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892683 4925 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892693 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892701 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892710 4925 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892718 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892726 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892735 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892743 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892752 4925 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892762 4925 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892771 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892781 4925 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892790 4925 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892798 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892806 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892815 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892824 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath 
\"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892832 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892841 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892849 4925 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892868 4925 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892876 4925 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892884 4925 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892893 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892901 4925 
reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892910 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892919 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892927 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892936 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892945 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892953 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892962 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892970 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892979 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892988 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.892997 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893005 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893014 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893022 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893032 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893041 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893049 4925 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893057 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893065 4925 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893074 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893104 4925 reconciler_common.go:293] "Volume detached for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893113 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893123 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893134 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893145 4925 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893156 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893166 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893179 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893188 4925 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893198 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893206 4925 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893214 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893223 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893232 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893240 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" 
(UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893248 4925 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893257 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893265 4925 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893274 4925 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893281 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893289 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893297 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 
crc kubenswrapper[4925]: I0202 10:57:31.893306 4925 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893315 4925 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893323 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893332 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893340 4925 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893348 4925 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893356 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893364 4925 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893372 4925 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893380 4925 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893388 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893396 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893404 4925 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893412 4925 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893420 4925 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node 
\"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893427 4925 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893435 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893443 4925 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893452 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893462 4925 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893470 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893478 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893485 4925 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893494 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893502 4925 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893510 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893547 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893597 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.893838 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.897476 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.898152 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.898179 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.902660 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.904004 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.904667 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.907334 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.912532 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.918146 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.923207 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.926449 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.935515 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.939059 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.943538 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.944237 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.955900 4925 scope.go:117] "RemoveContainer" containerID="3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324" Feb 02 10:57:31 crc kubenswrapper[4925]: E0202 10:57:31.956030 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.996319 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.996352 4925 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 
10:57:31.996361 4925 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.996370 4925 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.996378 4925 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.996386 4925 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.996396 4925 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.996404 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.996412 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4925]: I0202 10:57:31.996420 4925 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.008287 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.017256 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.400843 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401025 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:57:33.400995828 +0000 UTC m=+30.405244800 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.401253 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.401277 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.401296 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.401339 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401377 4925 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401434 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:33.401423789 +0000 UTC m=+30.405672841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401433 4925 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401454 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401486 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:57:33.40147004 +0000 UTC m=+30.405718992 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401496 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401517 4925 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401594 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401607 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401608 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:33.401588613 +0000 UTC m=+30.405837595 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401617 4925 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.401700 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:33.401655675 +0000 UTC m=+30.405904747 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.465820 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-kzdpz"] Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.466128 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kzdpz" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.467660 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.467704 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.468144 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.479139 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.487408 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.498353 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.507286 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.518769 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33cecca7fdbdb4854caa42f471d7dc6756427abe94ab2e71a8b8b0c59973c79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:24Z\\\",\\\"message\\\":\\\"W0202 10:57:13.717163 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 10:57:13.717520 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770029833 cert, and key in /tmp/serving-cert-3019347304/serving-signer.crt, /tmp/serving-cert-3019347304/serving-signer.key\\\\nI0202 10:57:14.023726 1 observer_polling.go:159] Starting file observer\\\\nW0202 10:57:14.027288 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 10:57:14.027497 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:14.028907 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3019347304/tls.crt::/tmp/serving-cert-3019347304/tls.key\\\\\\\"\\\\nF0202 10:57:24.495554 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.530831 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.542115 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.549523 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.603147 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsstd\" (UniqueName: \"kubernetes.io/projected/866ea9ea-2376-4958-899c-c6889eee7137-kube-api-access-gsstd\") pod \"node-resolver-kzdpz\" (UID: \"866ea9ea-2376-4958-899c-c6889eee7137\") " pod="openshift-dns/node-resolver-kzdpz" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.603202 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/866ea9ea-2376-4958-899c-c6889eee7137-hosts-file\") pod 
\"node-resolver-kzdpz\" (UID: \"866ea9ea-2376-4958-899c-c6889eee7137\") " pod="openshift-dns/node-resolver-kzdpz" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.629487 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:31:10.056375247 +0000 UTC Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.667485 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.668224 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.669747 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.670599 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.671864 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.672594 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.673348 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.674530 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.675313 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.676505 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.677193 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.678521 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.679294 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.679971 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.681225 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.681892 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.683226 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.683739 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.684502 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.685805 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.686385 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.687602 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.688175 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.689485 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.689993 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.690914 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.692282 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.692935 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.694180 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.694797 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.695869 4925 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath 
from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.695996 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.698152 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.699256 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.699825 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.701887 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.702762 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.703325 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.703827 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/866ea9ea-2376-4958-899c-c6889eee7137-hosts-file\") pod \"node-resolver-kzdpz\" (UID: \"866ea9ea-2376-4958-899c-c6889eee7137\") " pod="openshift-dns/node-resolver-kzdpz" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.703856 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsstd\" (UniqueName: \"kubernetes.io/projected/866ea9ea-2376-4958-899c-c6889eee7137-kube-api-access-gsstd\") pod \"node-resolver-kzdpz\" (UID: \"866ea9ea-2376-4958-899c-c6889eee7137\") " pod="openshift-dns/node-resolver-kzdpz" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.704153 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/866ea9ea-2376-4958-899c-c6889eee7137-hosts-file\") pod \"node-resolver-kzdpz\" (UID: \"866ea9ea-2376-4958-899c-c6889eee7137\") " pod="openshift-dns/node-resolver-kzdpz" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.705055 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.706512 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.707486 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.708776 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.710184 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.711498 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.712469 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.713642 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.714685 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.716950 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.717656 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.718408 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.719000 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.719719 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.720842 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsstd\" (UniqueName: \"kubernetes.io/projected/866ea9ea-2376-4958-899c-c6889eee7137-kube-api-access-gsstd\") pod \"node-resolver-kzdpz\" (UID: \"866ea9ea-2376-4958-899c-c6889eee7137\") " pod="openshift-dns/node-resolver-kzdpz" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.721360 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.721829 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.776751 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kzdpz" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.822722 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fphfd"] Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.823023 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-q4rr9"] Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.823272 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.823590 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.824810 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rlpb"] Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.825634 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.828578 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-f2xkn"] Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.829049 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.829347 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.829353 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.829392 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.829488 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.834473 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.835371 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.835988 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.836166 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.836801 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.836298 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.836165 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.837865 4925 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.838229 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.838461 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.838548 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.838977 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.839187 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.839360 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.841111 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.847639 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kzdpz" event={"ID":"866ea9ea-2376-4958-899c-c6889eee7137","Type":"ContainerStarted","Data":"bbbf53bec7423fef6dfa0a0875d0d22d4b2bc54af828d9c8d9085b654693f6f0"} Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.851802 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.853181 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53"} Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.853219 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"dee5a1e1242a2a1057576b497933db565ab28d774d95a8738940a71ba9e9c711"} Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.855938 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43"} Feb 02 10:57:32 crc 
kubenswrapper[4925]: I0202 10:57:32.855989 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647"} Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.856003 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8554c1f91c18039331313f8ca8464a1e1557af77d985ccff898217a6ba6b21d9"} Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.863130 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.866004 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.870394 4925 scope.go:117] "RemoveContainer" containerID="3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324" Feb 02 10:57:32 crc kubenswrapper[4925]: E0202 10:57:32.870623 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.872578 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"eba0de84ee6ee7d12a73b305cf7f4c780faaaf0e218a85136babeafdebbed78a"} Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.875609 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33cecca7fdbdb4854caa42f471d7dc6756427abe94ab2e71a8b8b0c59973c79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:24Z\\\",\\\"message\\\":\\\"W0202 10:57:13.717163 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 10:57:13.717520 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770029833 cert, and key in /tmp/serving-cert-3019347304/serving-signer.crt, /tmp/serving-cert-3019347304/serving-signer.key\\\\nI0202 10:57:14.023726 1 observer_polling.go:159] Starting file observer\\\\nW0202 10:57:14.027288 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 10:57:14.027497 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:14.028907 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3019347304/tls.crt::/tmp/serving-cert-3019347304/tls.key\\\\\\\"\\\\nF0202 10:57:24.495554 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.885386 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.893614 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.900796 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.905948 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-mcd-auth-proxy-config\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.905985 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-netd\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906003 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvmwh\" (UniqueName: \"kubernetes.io/projected/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-kube-api-access-xvmwh\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906020 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-log-socket\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906035 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906140 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-multus-cni-dir\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906170 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-cnibin\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906190 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-env-overrides\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906205 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a57c5d12-a4de-413c-a581-4b693550e8c3-ovn-node-metrics-cert\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906225 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-hostroot\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906241 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-config\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906259 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-var-lib-kubelet\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906326 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b84c6881-f719-456f-9135-7dfb7688a48d-multus-daemon-config\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906418 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-run-multus-certs\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906492 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-systemd\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906526 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-openvswitch\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906556 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b84c6881-f719-456f-9135-7dfb7688a48d-cni-binary-copy\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906607 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fzzh\" (UniqueName: \"kubernetes.io/projected/b84c6881-f719-456f-9135-7dfb7688a48d-kube-api-access-7fzzh\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906628 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-netns\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906680 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-system-cni-dir\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906698 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-bin\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906715 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-var-lib-cni-bin\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906730 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-rootfs\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906744 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr96t\" (UniqueName: \"kubernetes.io/projected/a57c5d12-a4de-413c-a581-4b693550e8c3-kube-api-access-tr96t\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906772 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-multus-socket-dir-parent\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906788 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-proxy-tls\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 
10:57:32.906848 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-run-netns\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906869 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-multus-conf-dir\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906886 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-var-lib-openvswitch\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906936 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-os-release\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906952 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-kubelet\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.906966 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-ovn\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.907023 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-node-log\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.907041 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.907113 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-etc-kubernetes\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.907133 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-etc-openvswitch\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.907176 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-systemd-units\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.907192 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-script-lib\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.907208 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-run-k8s-cni-cncf-io\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.907258 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-slash\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.907531 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-var-lib-cni-multus\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 
10:57:32.910853 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.918515 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.926434 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.935039 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.942854 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.952504 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.960868 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.970142 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:32 crc kubenswrapper[4925]: I0202 10:57:32.977503 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.007874 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008025 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-rootfs\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.007971 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-rootfs\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008112 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr96t\" (UniqueName: \"kubernetes.io/projected/a57c5d12-a4de-413c-a581-4b693550e8c3-kube-api-access-tr96t\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008151 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-var-lib-cni-bin\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008179 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-system-cni-dir\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008216 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-multus-socket-dir-parent\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008249 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-var-lib-cni-bin\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008237 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-proxy-tls\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008300 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-run-netns\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008321 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-multus-conf-dir\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008333 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-multus-socket-dir-parent\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008347 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-var-lib-openvswitch\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008387 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-multus-conf-dir\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008383 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-var-lib-openvswitch\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008406 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-kubelet\") pod \"ovnkube-node-6rlpb\" (UID: 
\"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008432 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-kubelet\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008479 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-ovn\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008503 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-node-log\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008503 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-run-netns\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008523 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc 
kubenswrapper[4925]: I0202 10:57:33.008723 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-os-release\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008553 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008581 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-ovn\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.008563 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-node-log\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009381 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-os-release\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009447 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-etc-openvswitch\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009521 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-etc-openvswitch\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009476 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009598 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-etc-kubernetes\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009645 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-script-lib\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009689 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-etc-kubernetes\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009718 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-systemd-units\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009835 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-systemd-units\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.009887 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-run-k8s-cni-cncf-io\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010465 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-script-lib\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010474 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-run-k8s-cni-cncf-io\") pod 
\"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010533 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-slash\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010560 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-cnibin\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010585 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-var-lib-cni-multus\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010609 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-os-release\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010666 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-netd\") pod \"ovnkube-node-6rlpb\" (UID: 
\"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010691 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-mcd-auth-proxy-config\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010715 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvmwh\" (UniqueName: \"kubernetes.io/projected/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-kube-api-access-xvmwh\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010735 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-log-socket\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010757 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010781 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/73934878-f30f-4170-aa82-716b163b9928-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010839 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-multus-cni-dir\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010860 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-cnibin\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010880 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-env-overrides\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010904 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a57c5d12-a4de-413c-a581-4b693550e8c3-ovn-node-metrics-cert\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010929 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-config\") pod 
\"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010951 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz6l2\" (UniqueName: \"kubernetes.io/projected/73934878-f30f-4170-aa82-716b163b9928-kube-api-access-zz6l2\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010973 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-hostroot\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.010992 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-systemd\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011011 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-openvswitch\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011035 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-var-lib-kubelet\") pod \"multus-q4rr9\" (UID: 
\"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011058 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b84c6881-f719-456f-9135-7dfb7688a48d-multus-daemon-config\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011102 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-run-multus-certs\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011126 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/73934878-f30f-4170-aa82-716b163b9928-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011153 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b84c6881-f719-456f-9135-7dfb7688a48d-cni-binary-copy\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011177 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fzzh\" (UniqueName: \"kubernetes.io/projected/b84c6881-f719-456f-9135-7dfb7688a48d-kube-api-access-7fzzh\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " 
pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011197 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-netns\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011218 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-system-cni-dir\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011237 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-bin\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011333 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-bin\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011710 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-slash\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011740 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-multus-cni-dir\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011763 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-var-lib-cni-multus\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.011788 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-cnibin\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.012160 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-var-lib-kubelet\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.012188 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-netd\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.012218 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-env-overrides\") pod \"ovnkube-node-6rlpb\" (UID: 
\"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.012866 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b84c6881-f719-456f-9135-7dfb7688a48d-multus-daemon-config\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.012904 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-mcd-auth-proxy-config\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.012926 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-host-run-multus-certs\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.012961 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-log-socket\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.012994 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.013032 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-hostroot\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.013630 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-config\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.013697 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-netns\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.013764 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b84c6881-f719-456f-9135-7dfb7688a48d-cni-binary-copy\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.013916 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b84c6881-f719-456f-9135-7dfb7688a48d-system-cni-dir\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.013965 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-systemd\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.013973 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-openvswitch\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.015064 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a57c5d12-a4de-413c-a581-4b693550e8c3-ovn-node-metrics-cert\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.020625 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-proxy-tls\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.034550 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.038677 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr96t\" (UniqueName: \"kubernetes.io/projected/a57c5d12-a4de-413c-a581-4b693550e8c3-kube-api-access-tr96t\") pod \"ovnkube-node-6rlpb\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.041689 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fzzh\" (UniqueName: \"kubernetes.io/projected/b84c6881-f719-456f-9135-7dfb7688a48d-kube-api-access-7fzzh\") pod \"multus-q4rr9\" (UID: \"b84c6881-f719-456f-9135-7dfb7688a48d\") " pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.046823 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvmwh\" (UniqueName: \"kubernetes.io/projected/08797ee8-d3b4-4eed-8482-c19a5b6b87c4-kube-api-access-xvmwh\") pod \"machine-config-daemon-fphfd\" (UID: \"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.056724 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.066987 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.075405 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.086444 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.095412 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.111979 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112017 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-cnibin\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112033 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-os-release\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112058 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/73934878-f30f-4170-aa82-716b163b9928-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112109 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz6l2\" (UniqueName: \"kubernetes.io/projected/73934878-f30f-4170-aa82-716b163b9928-kube-api-access-zz6l2\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112129 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/73934878-f30f-4170-aa82-716b163b9928-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112174 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-system-cni-dir\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112209 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-cnibin\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112224 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-system-cni-dir\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112277 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-os-release\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112809 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/73934878-f30f-4170-aa82-716b163b9928-cni-binary-copy\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.112915 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/73934878-f30f-4170-aa82-716b163b9928-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.119538 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/73934878-f30f-4170-aa82-716b163b9928-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.126826 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz6l2\" (UniqueName: \"kubernetes.io/projected/73934878-f30f-4170-aa82-716b163b9928-kube-api-access-zz6l2\") pod \"multus-additional-cni-plugins-f2xkn\" (UID: \"73934878-f30f-4170-aa82-716b163b9928\") " pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.179270 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q4rr9" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.187733 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 10:57:33 crc kubenswrapper[4925]: W0202 10:57:33.194532 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84c6881_f719_456f_9135_7dfb7688a48d.slice/crio-7d703568cee596ee69b5058ca5512f7d966bf7f9530e35ab09d57b0c7751f383 WatchSource:0}: Error finding container 7d703568cee596ee69b5058ca5512f7d966bf7f9530e35ab09d57b0c7751f383: Status 404 returned error can't find the container with id 7d703568cee596ee69b5058ca5512f7d966bf7f9530e35ab09d57b0c7751f383 Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.194942 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:33 crc kubenswrapper[4925]: W0202 10:57:33.202191 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08797ee8_d3b4_4eed_8482_c19a5b6b87c4.slice/crio-0793f9244317a31e46ae2c2a109fa44217476f4802492e58d3ce9910c8b4f91a WatchSource:0}: Error finding container 0793f9244317a31e46ae2c2a109fa44217476f4802492e58d3ce9910c8b4f91a: Status 404 returned error can't find the container with id 0793f9244317a31e46ae2c2a109fa44217476f4802492e58d3ce9910c8b4f91a Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.203842 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.280005 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.291104 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.296512 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.299692 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.303128 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.322765 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 
10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: W0202 10:57:33.329780 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda57c5d12_a4de_413c_a581_4b693550e8c3.slice/crio-1b99cb00f8af15785503e47f7f140df80e76860f057c2cb3056d9138a36333bf WatchSource:0}: Error finding container 1b99cb00f8af15785503e47f7f140df80e76860f057c2cb3056d9138a36333bf: Status 404 returned error can't find the container with id 1b99cb00f8af15785503e47f7f140df80e76860f057c2cb3056d9138a36333bf Feb 02 10:57:33 crc kubenswrapper[4925]: W0202 10:57:33.332664 4925 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73934878_f30f_4170_aa82_716b163b9928.slice/crio-14abe26100400a8eb1a230467043123041001dd7bc4540e642eea49c9aae6460 WatchSource:0}: Error finding container 14abe26100400a8eb1a230467043123041001dd7bc4540e642eea49c9aae6460: Status 404 returned error can't find the container with id 14abe26100400a8eb1a230467043123041001dd7bc4540e642eea49c9aae6460 Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.337663 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.346269 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.354936 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.364719 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.375439 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.387885 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.397211 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.413431 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.415407 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.415518 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.415575 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.415594 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 
10:57:33.415653 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:57:35.415601163 +0000 UTC m=+32.419850165 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415704 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415720 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415730 4925 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415730 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415741 4925 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415750 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415768 4925 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415777 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:35.415758577 +0000 UTC m=+32.420007539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415791 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:35.415785498 +0000 UTC m=+32.420034460 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415799 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:35.415795208 +0000 UTC m=+32.420044170 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.415737 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.415964 4925 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.416069 4925 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:35.416043475 +0000 UTC m=+32.420292507 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.429749 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.443331 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.461932 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.483314 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.500983 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.517604 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.535454 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.547306 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.572638 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b1035
81607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.585240 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.596737 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.607330 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.624932 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.630642 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:10:43.01978512 +0000 UTC Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.638474 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"r
eady\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reas
on\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.664176 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.664216 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.664178 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.664308 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.664368 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:33 crc kubenswrapper[4925]: E0202 10:57:33.664428 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.876912 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kzdpz" event={"ID":"866ea9ea-2376-4958-899c-c6889eee7137","Type":"ContainerStarted","Data":"2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d"} Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.878150 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"1b99cb00f8af15785503e47f7f140df80e76860f057c2cb3056d9138a36333bf"} Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.879092 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"0793f9244317a31e46ae2c2a109fa44217476f4802492e58d3ce9910c8b4f91a"} Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.879965 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerStarted","Data":"14abe26100400a8eb1a230467043123041001dd7bc4540e642eea49c9aae6460"} Feb 02 10:57:33 crc kubenswrapper[4925]: I0202 10:57:33.881961 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4rr9" event={"ID":"b84c6881-f719-456f-9135-7dfb7688a48d","Type":"ContainerStarted","Data":"7d703568cee596ee69b5058ca5512f7d966bf7f9530e35ab09d57b0c7751f383"} Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.138569 4925 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.631098 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:09:40.932972298 +0000 UTC Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.682349 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.702798 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64
c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.721368 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.748814 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.767317 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.812282 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.827591 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.847441 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.863477 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.876189 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.884812 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789"} Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.886452 4925 generic.go:334] "Generic (PLEG): container finished" podID="73934878-f30f-4170-aa82-716b163b9928" containerID="1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637" exitCode=0 Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.886537 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerDied","Data":"1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637"} Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.888201 4925 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.889066 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4rr9" event={"ID":"b84c6881-f719-456f-9135-7dfb7688a48d","Type":"ContainerStarted","Data":"3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e"} Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.890689 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466" exitCode=0 Feb 02 10:57:34 crc 
kubenswrapper[4925]: I0202 10:57:34.890728 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466"} Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.892906 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520"} Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.892938 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48"} Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.905659 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.924400 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.940359 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52f
e3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T1
0:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.965923 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:34 crc kubenswrapper[4925]: I0202 10:57:34.987507 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.008243 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.027827 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.044000 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.054628 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.066910 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.078296 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.090564 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.104239 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64
c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.119012 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.135941 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.438686 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.438801 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.438826 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.438847 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.438926 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:57:39.438892238 +0000 UTC m=+36.443141210 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.438946 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.438967 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.438978 4925 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.439009 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.439019 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:39.439006411 +0000 UTC m=+36.443255373 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.439160 4925 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.439301 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:39.439267228 +0000 UTC m=+36.443516230 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.439022 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.439374 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.439403 4925 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.439100 4925 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.439466 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:39.439446723 +0000 UTC m=+36.443695765 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.439499 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:39.439484694 +0000 UTC m=+36.443733766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.631958 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:27:20.272075689 +0000 UTC Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.663811 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.663822 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.663932 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.664049 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.663836 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:35 crc kubenswrapper[4925]: E0202 10:57:35.664303 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.782879 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lp7j8"] Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.783597 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.786065 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.786391 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.786455 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.788175 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.799029 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.823404 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.838064 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.854239 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.864686 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.875341 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.885213 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.899770 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.905361 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" 
event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerStarted","Data":"335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a"} Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.908693 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.908726 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.908739 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.908751 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.908764 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.910315 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.917541 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.921985 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.924454 4925 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.925023 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.938891 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.944154 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdxnj\" (UniqueName: \"kubernetes.io/projected/43ec29b9-abb0-4fb5-8463-ff2860921d8b-kube-api-access-cdxnj\") pod \"node-ca-lp7j8\" (UID: \"43ec29b9-abb0-4fb5-8463-ff2860921d8b\") " pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.944253 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/43ec29b9-abb0-4fb5-8463-ff2860921d8b-serviceca\") pod \"node-ca-lp7j8\" (UID: \"43ec29b9-abb0-4fb5-8463-ff2860921d8b\") " pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.944278 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43ec29b9-abb0-4fb5-8463-ff2860921d8b-host\") pod \"node-ca-lp7j8\" (UID: \"43ec29b9-abb0-4fb5-8463-ff2860921d8b\") " pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.950931 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.964928 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.976246 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:35 crc kubenswrapper[4925]: I0202 10:57:35.990966 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.008399 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.020493 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.029586 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.039243 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.045665 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/43ec29b9-abb0-4fb5-8463-ff2860921d8b-serviceca\") pod \"node-ca-lp7j8\" (UID: \"43ec29b9-abb0-4fb5-8463-ff2860921d8b\") " pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.045727 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43ec29b9-abb0-4fb5-8463-ff2860921d8b-host\") pod \"node-ca-lp7j8\" (UID: \"43ec29b9-abb0-4fb5-8463-ff2860921d8b\") " pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.045787 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdxnj\" (UniqueName: \"kubernetes.io/projected/43ec29b9-abb0-4fb5-8463-ff2860921d8b-kube-api-access-cdxnj\") pod \"node-ca-lp7j8\" (UID: \"43ec29b9-abb0-4fb5-8463-ff2860921d8b\") " pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.046205 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43ec29b9-abb0-4fb5-8463-ff2860921d8b-host\") pod \"node-ca-lp7j8\" (UID: \"43ec29b9-abb0-4fb5-8463-ff2860921d8b\") " pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.046825 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/43ec29b9-abb0-4fb5-8463-ff2860921d8b-serviceca\") pod \"node-ca-lp7j8\" (UID: \"43ec29b9-abb0-4fb5-8463-ff2860921d8b\") " pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.061054 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.064049 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdxnj\" (UniqueName: \"kubernetes.io/projected/43ec29b9-abb0-4fb5-8463-ff2860921d8b-kube-api-access-cdxnj\") pod \"node-ca-lp7j8\" (UID: \"43ec29b9-abb0-4fb5-8463-ff2860921d8b\") " pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.073435 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.087195 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.099849 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.112999 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 
10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.115472 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lp7j8" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.128439 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-contr
oller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: W0202 10:57:36.128653 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43ec29b9_abb0_4fb5_8463_ff2860921d8b.slice/crio-a5f088bf819b36a6e81dee97d16e7063567d2cebce22d381f100663fce35b249 WatchSource:0}: Error finding container a5f088bf819b36a6e81dee97d16e7063567d2cebce22d381f100663fce35b249: Status 404 returned error can't find the container with id a5f088bf819b36a6e81dee97d16e7063567d2cebce22d381f100663fce35b249 Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.139878 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.152170 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.164336 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.179222 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.633150 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:06:50.986302411 +0000 UTC Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.914934 4925 generic.go:334] "Generic (PLEG): container finished" podID="73934878-f30f-4170-aa82-716b163b9928" containerID="335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a" exitCode=0 Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 
10:57:36.914999 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerDied","Data":"335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a"} Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.921509 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.931139 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lp7j8" event={"ID":"43ec29b9-abb0-4fb5-8463-ff2860921d8b","Type":"ContainerStarted","Data":"3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08"} Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.931194 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lp7j8" event={"ID":"43ec29b9-abb0-4fb5-8463-ff2860921d8b","Type":"ContainerStarted","Data":"a5f088bf819b36a6e81dee97d16e7063567d2cebce22d381f100663fce35b249"} Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.936535 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.953486 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64
c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.965908 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.976000 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:36 crc kubenswrapper[4925]: I0202 10:57:36.991193 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.003590 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.031841 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.051393 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.070942 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.090304 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.103774 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.113125 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.127387 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.140239 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.149370 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.159273 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.171623 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.183527 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.198664 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64
c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.212731 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.243280 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.268851 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.282650 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.310410 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.326278 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.347118 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.359211 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.371242 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.383929 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.393984 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.633955 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 17:47:23.994156005 +0000 UTC Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.663354 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.663526 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:37 crc kubenswrapper[4925]: E0202 10:57:37.663557 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.663727 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:37 crc kubenswrapper[4925]: E0202 10:57:37.663945 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:37 crc kubenswrapper[4925]: E0202 10:57:37.664237 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.777654 4925 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.780755 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.780824 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.780844 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.781009 4925 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.793608 4925 kubelet_node_status.go:115] "Node was previously 
registered" node="crc" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.794013 4925 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.795668 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.795754 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.795780 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.795814 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.795925 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:37Z","lastTransitionTime":"2026-02-02T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:37 crc kubenswrapper[4925]: E0202 10:57:37.826030 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.832941 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.833002 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.833016 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.833040 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.833056 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:37Z","lastTransitionTime":"2026-02-02T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:37 crc kubenswrapper[4925]: E0202 10:57:37.856458 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.860956 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.861004 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.861019 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.861039 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.861052 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:37Z","lastTransitionTime":"2026-02-02T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:37 crc kubenswrapper[4925]: E0202 10:57:37.878659 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.883179 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.883228 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.883238 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.883253 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.883264 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:37Z","lastTransitionTime":"2026-02-02T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:37 crc kubenswrapper[4925]: E0202 10:57:37.894966 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.898897 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.898922 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.898932 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.898947 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.898957 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:37Z","lastTransitionTime":"2026-02-02T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:37 crc kubenswrapper[4925]: E0202 10:57:37.911204 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: E0202 10:57:37.911331 4925 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.913029 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.913067 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.913104 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.913122 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.913133 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:37Z","lastTransitionTime":"2026-02-02T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.936281 4925 generic.go:334] "Generic (PLEG): container finished" podID="73934878-f30f-4170-aa82-716b163b9928" containerID="2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef" exitCode=0 Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.936330 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerDied","Data":"2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef"} Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.954903 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.973681 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2
245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:37 crc kubenswrapper[4925]: I0202 10:57:37.992693 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09590
0ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.005503 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.017617 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.020672 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.020702 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.020711 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.020725 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.020734 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.030420 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.041593 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.053828 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.062565 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.075783 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.089343 4925 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d421
3c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.104463 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.114918 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.123058 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.123115 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.123126 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.123140 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.123150 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.126834 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.137706 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.225393 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.225449 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.225458 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc 
kubenswrapper[4925]: I0202 10:57:38.225474 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.225485 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.328560 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.328601 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.328617 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.328633 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.328681 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.431776 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.431831 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.431852 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.431874 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.431889 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.534947 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.534979 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.534989 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.535003 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.535012 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.634989 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:30:08.415327023 +0000 UTC Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.637613 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.637658 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.637669 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.637686 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.637699 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.739826 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.740121 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.740259 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.740417 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.740581 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.843626 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.843666 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.843677 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.843692 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.843701 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.943646 4925 generic.go:334] "Generic (PLEG): container finished" podID="73934878-f30f-4170-aa82-716b163b9928" containerID="d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0" exitCode=0 Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.943718 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerDied","Data":"d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.946826 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.946881 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.946900 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.946925 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.946942 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:38Z","lastTransitionTime":"2026-02-02T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.953410 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} Feb 02 10:57:38 crc kubenswrapper[4925]: I0202 10:57:38.982133 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\
\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.002059 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.015977 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.031689 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.051136 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.051223 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.051242 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.051292 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.051311 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.063591 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.083728 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2
245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.099223 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819ee
db413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.111978 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e
1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.125204 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.138980 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.154053 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.154120 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.154132 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.154152 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.154166 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.157826 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b
89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.169892 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.181606 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.194571 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.205932 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.256657 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.256699 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.256710 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc 
kubenswrapper[4925]: I0202 10:57:39.256728 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.256740 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.359491 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.359520 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.359529 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.359541 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.359553 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.461827 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.461866 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.461878 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.461894 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.461906 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.482065 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.482212 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.482252 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482275 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:57:47.48224998 +0000 UTC m=+44.486498952 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.482310 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482342 4925 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.482358 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482388 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:47.482375243 +0000 UTC m=+44.486624205 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482472 4925 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482509 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:47.482499787 +0000 UTC m=+44.486748759 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482574 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482587 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482598 4925 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482626 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:47.48261816 +0000 UTC m=+44.486867132 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482678 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482690 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482698 4925 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.482725 4925 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:57:47.482714622 +0000 UTC m=+44.486963594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.540780 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.541580 4925 scope.go:117] "RemoveContainer" containerID="3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324" Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.541773 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.564575 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.564651 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.564675 4925 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.564706 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.564730 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.635473 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:17:32.071950469 +0000 UTC Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.663978 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.664131 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.664255 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.664559 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.664665 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:39 crc kubenswrapper[4925]: E0202 10:57:39.664779 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.666554 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.666618 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.666642 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.666669 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.666690 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.769517 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.769589 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.769614 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.769643 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.769666 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.873456 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.873521 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.873543 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.873573 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.873598 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.968778 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerStarted","Data":"98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.976709 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.976773 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.976793 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.976817 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.976836 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:39Z","lastTransitionTime":"2026-02-02T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:39 crc kubenswrapper[4925]: I0202 10:57:39.995536 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:39Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.015372 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.029753 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.042755 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.058302 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.071344 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.079410 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.079447 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.079458 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:40 crc 
kubenswrapper[4925]: I0202 10:57:40.079475 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.079486 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:40Z","lastTransitionTime":"2026-02-02T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.087451 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2
245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.116424 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.133892 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.146459 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.157593 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.176647 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.182677 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.182748 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.182772 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.182805 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.182830 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:40Z","lastTransitionTime":"2026-02-02T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.192917 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.212780 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.227635 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.286563 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.286639 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.286659 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.286682 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.286701 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:40Z","lastTransitionTime":"2026-02-02T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.389359 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.389394 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.389408 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.389423 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.389436 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:40Z","lastTransitionTime":"2026-02-02T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.491503 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.491744 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.491892 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.491989 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.492069 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:40Z","lastTransitionTime":"2026-02-02T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.593848 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.594103 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.594184 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.594253 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.594315 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:40Z","lastTransitionTime":"2026-02-02T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.636319 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:59:34.743008857 +0000 UTC Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.695735 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.695791 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.695810 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.695834 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.695850 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:40Z","lastTransitionTime":"2026-02-02T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.799496 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.799551 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.799570 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.799595 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.799611 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:40Z","lastTransitionTime":"2026-02-02T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.902286 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.902330 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.902346 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.902363 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.902376 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:40Z","lastTransitionTime":"2026-02-02T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.985316 4925 generic.go:334] "Generic (PLEG): container finished" podID="73934878-f30f-4170-aa82-716b163b9928" containerID="98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470" exitCode=0 Feb 02 10:57:40 crc kubenswrapper[4925]: I0202 10:57:40.985362 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerDied","Data":"98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.004368 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.004391 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.004400 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.004413 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.004422 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.006847 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.029922 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.047155 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64
c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.062309 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.073366 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.083959 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.093446 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.107487 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.107520 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.107531 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.107548 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.107559 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.110782 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.122414 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2
245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.146183 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.156303 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.167593 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.177372 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.189180 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.200055 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.210293 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.210365 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.210377 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.210417 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.210433 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.312689 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.312727 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.312739 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.312756 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.312767 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.415341 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.415385 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.415395 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.415411 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.415425 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.518286 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.518336 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.518351 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.518368 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.518381 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.620606 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.620637 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.620646 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.620663 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.620672 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.636650 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:20:22.836319923 +0000 UTC Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.664290 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.664369 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:41 crc kubenswrapper[4925]: E0202 10:57:41.664406 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.664291 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:41 crc kubenswrapper[4925]: E0202 10:57:41.664538 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:41 crc kubenswrapper[4925]: E0202 10:57:41.664719 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.723869 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.723905 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.723915 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.723929 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.723939 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.827712 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.827753 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.827764 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.827778 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.827789 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.934514 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.934549 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.934560 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.934573 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.934582 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:41Z","lastTransitionTime":"2026-02-02T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.993966 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa"} Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.994312 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:41 crc kubenswrapper[4925]: I0202 10:57:41.994340 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.000881 4925 generic.go:334] "Generic (PLEG): container finished" podID="73934878-f30f-4170-aa82-716b163b9928" containerID="95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175" exitCode=0 Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.000927 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerDied","Data":"95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.014445 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.021780 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.022920 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.026373 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.036388 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc 
kubenswrapper[4925]: I0202 10:57:42.036416 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.036425 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.036438 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.036447 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.038880 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.053111 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.064578 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.076241 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.090598 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.103965 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.114625 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.132987 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.144970 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.145014 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.145028 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.145045 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.145059 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.149401 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.161699 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.172199 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.192211 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.210695 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.225533 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.241498 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.247756 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc 
kubenswrapper[4925]: I0202 10:57:42.247802 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.247817 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.247834 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.247846 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.251389 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.265237 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02
T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.278908 4925 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d421
3c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.291805 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.304167 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.318712 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.330941 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.349826 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.349865 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.349877 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc 
kubenswrapper[4925]: I0202 10:57:42.349893 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.349906 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.354763 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.377505 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.396423 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.409434 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.424212 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.434538 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.451991 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.452032 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.452043 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.452057 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.452066 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.554570 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.554614 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.554704 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.554743 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.554792 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.637152 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:11:47.260864199 +0000 UTC Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.658392 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.658456 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.658479 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.658503 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.658522 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.761356 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.761402 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.761410 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.761426 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.761435 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.863836 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.863875 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.863887 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.863904 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.863916 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.967221 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.967295 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.967309 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.967329 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:42 crc kubenswrapper[4925]: I0202 10:57:42.967344 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:42Z","lastTransitionTime":"2026-02-02T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.009374 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" event={"ID":"73934878-f30f-4170-aa82-716b163b9928","Type":"ContainerStarted","Data":"8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.009458 4925 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.026587 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.045346 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.067756 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.069353 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.069400 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.069441 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.069454 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.069462 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.100417 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.123833 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.149905 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.164515 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.172226 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.172273 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.172286 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.172305 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.172317 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.177469 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.197298 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.218771 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1
529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.237170 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.255701 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.267996 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.274472 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.274514 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.274524 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 
10:57:43.274541 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.274553 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.280385 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.289640 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc08
6a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.376481 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.376549 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.376566 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.376589 4925 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.376606 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.479561 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.479609 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.479618 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.479632 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.479641 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.582599 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.582661 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.582681 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.582707 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.582725 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.638329 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:58:00.160490149 +0000 UTC Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.664150 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.664220 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:43 crc kubenswrapper[4925]: E0202 10:57:43.664319 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.664163 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:43 crc kubenswrapper[4925]: E0202 10:57:43.664466 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:43 crc kubenswrapper[4925]: E0202 10:57:43.664591 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.685901 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.685944 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.685956 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.685974 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.685986 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.789019 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.789118 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.789143 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.789172 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.789194 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.892245 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.892327 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.892345 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.892367 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.892384 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.994364 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.994407 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.994422 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.994439 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:43 crc kubenswrapper[4925]: I0202 10:57:43.994451 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:43Z","lastTransitionTime":"2026-02-02T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.012375 4925 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.110506 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.110546 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.110557 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.110572 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.110583 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:44Z","lastTransitionTime":"2026-02-02T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.214050 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.214132 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.214150 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.214173 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.214189 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:44Z","lastTransitionTime":"2026-02-02T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.316618 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.316690 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.316709 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.316732 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.316745 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:44Z","lastTransitionTime":"2026-02-02T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.419730 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.419786 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.419802 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.419824 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.419844 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:44Z","lastTransitionTime":"2026-02-02T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.522294 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.522334 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.522343 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.522357 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.522365 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:44Z","lastTransitionTime":"2026-02-02T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.624305 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.624350 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.624361 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.624374 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.624383 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:44Z","lastTransitionTime":"2026-02-02T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.638720 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:00:02.715156045 +0000 UTC Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.676800 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 
10:57:44.692427 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.704981 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.727463 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.727566 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.727581 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.727601 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.727617 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:44Z","lastTransitionTime":"2026-02-02T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.729984 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.745715 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010
843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a4858
1621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T
10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.766302 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.781164 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.794745 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.810470 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.821035 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.830269 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.830325 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.830344 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.830367 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.830383 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:44Z","lastTransitionTime":"2026-02-02T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.835191 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.849939 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.863839 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.883164 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64
c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.899512 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.933468 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.933521 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.933531 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.933545 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:44 crc kubenswrapper[4925]: I0202 10:57:44.933553 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:44Z","lastTransitionTime":"2026-02-02T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.036941 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.037013 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.037036 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.037063 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.037109 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.140364 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.140420 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.140433 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.140451 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.140463 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.242795 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.242827 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.242836 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.242851 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.242861 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.344630 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.344695 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.344717 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.344741 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.344757 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.447882 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.447925 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.447937 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.447954 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.447967 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.470309 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt"] Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.470787 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.477231 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9551801b-8926-4673-942b-bcd89aa4eb7b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.477296 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9551801b-8926-4673-942b-bcd89aa4eb7b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.477367 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9551801b-8926-4673-942b-bcd89aa4eb7b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.477437 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmwg\" (UniqueName: \"kubernetes.io/projected/9551801b-8926-4673-942b-bcd89aa4eb7b-kube-api-access-kxmwg\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.480421 4925 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.480656 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.491816 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8
ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.515403 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"cont
ainerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.532856 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.549458 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.549945 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.549970 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.549979 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.549991 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.550002 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.559306 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.579204 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9551801b-8926-4673-942b-bcd89aa4eb7b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.579269 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmwg\" (UniqueName: \"kubernetes.io/projected/9551801b-8926-4673-942b-bcd89aa4eb7b-kube-api-access-kxmwg\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.579309 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9551801b-8926-4673-942b-bcd89aa4eb7b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") 
" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.579330 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9551801b-8926-4673-942b-bcd89aa4eb7b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.580489 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9551801b-8926-4673-942b-bcd89aa4eb7b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.582276 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.582467 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9551801b-8926-4673-942b-bcd89aa4eb7b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.591940 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9551801b-8926-4673-942b-bcd89aa4eb7b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.597002 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.600275 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmwg\" (UniqueName: \"kubernetes.io/projected/9551801b-8926-4673-942b-bcd89aa4eb7b-kube-api-access-kxmwg\") pod \"ovnkube-control-plane-749d76644c-wjwxt\" (UID: \"9551801b-8926-4673-942b-bcd89aa4eb7b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.615840 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.628654 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.640288 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.640343 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:27:57.621947202 +0000 UTC Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.652414 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.652675 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.652770 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.652894 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.652981 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.657234 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.665862 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.666047 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.665951 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:45 crc kubenswrapper[4925]: E0202 10:57:45.666585 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:45 crc kubenswrapper[4925]: E0202 10:57:45.666611 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:45 crc kubenswrapper[4925]: E0202 10:57:45.666680 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.671981 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.684935 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.702024 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.715141 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.728565 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.755921 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.755955 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.755964 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc 
kubenswrapper[4925]: I0202 10:57:45.755977 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.755985 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.789712 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" Feb 02 10:57:45 crc kubenswrapper[4925]: W0202 10:57:45.806896 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9551801b_8926_4673_942b_bcd89aa4eb7b.slice/crio-d75177b8ef9cc8f4f47d3e8411c8998d6ba3464f8153329d5891915df04cc9f0 WatchSource:0}: Error finding container d75177b8ef9cc8f4f47d3e8411c8998d6ba3464f8153329d5891915df04cc9f0: Status 404 returned error can't find the container with id d75177b8ef9cc8f4f47d3e8411c8998d6ba3464f8153329d5891915df04cc9f0 Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.859857 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.859885 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.859894 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.859906 4925 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.859914 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.963425 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.963465 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.963477 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.963494 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:45 crc kubenswrapper[4925]: I0202 10:57:45.963506 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:45Z","lastTransitionTime":"2026-02-02T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.021237 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/0.log" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.024552 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa" exitCode=1 Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.024591 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.025508 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" event={"ID":"9551801b-8926-4673-942b-bcd89aa4eb7b","Type":"ContainerStarted","Data":"d75177b8ef9cc8f4f47d3e8411c8998d6ba3464f8153329d5891915df04cc9f0"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.027255 4925 scope.go:117] "RemoveContainer" containerID="9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.043028 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.056721 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.066327 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.066364 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.066376 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.066393 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.066404 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.068639 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.088037 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.106594 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.121508 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.138671 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.153895 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.170987 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64
c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.173507 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.173683 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.173757 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.173796 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.173828 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.191343 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hjf4s"] Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.191845 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.192019 4925 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:46 crc kubenswrapper[4925]: E0202 10:57:46.192906 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.210502 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.228747 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.241969 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.269458 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\":44.655544 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655554 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 
10:57:44.655577 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:57:44.655454 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655778 6219 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:57:44.655686 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656038 6219 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655963 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656010 6219 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.276565 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.276598 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.276607 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.276623 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.276633 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.289903 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcx5x\" (UniqueName: \"kubernetes.io/projected/39f183d5-0612-452e-b762-c841df3a306d-kube-api-access-hcx5x\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.289974 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.292767 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.324185 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5
da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.340485 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.374230 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5
da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.378970 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.379040 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.379065 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.379146 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.379171 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.391384 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.391450 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcx5x\" (UniqueName: \"kubernetes.io/projected/39f183d5-0612-452e-b762-c841df3a306d-kube-api-access-hcx5x\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:46 crc kubenswrapper[4925]: E0202 10:57:46.391697 4925 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:46 crc kubenswrapper[4925]: E0202 10:57:46.391911 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs podName:39f183d5-0612-452e-b762-c841df3a306d nodeName:}" failed. No retries permitted until 2026-02-02 10:57:46.891872509 +0000 UTC m=+43.896121511 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs") pod "network-metrics-daemon-hjf4s" (UID: "39f183d5-0612-452e-b762-c841df3a306d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.394882 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.411621 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.419301 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcx5x\" (UniqueName: \"kubernetes.io/projected/39f183d5-0612-452e-b762-c841df3a306d-kube-api-access-hcx5x\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.427474 4925 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.447200 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\":44.655544 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655554 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 
10:57:44.655577 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:57:44.655454 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655778 6219 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:57:44.655686 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656038 6219 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655963 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656010 6219 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.465189 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.480830 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.482356 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc 
kubenswrapper[4925]: I0202 10:57:46.482399 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.482409 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.482424 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.482434 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.497514 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.512122 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.526822 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc 
kubenswrapper[4925]: I0202 10:57:46.541681 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.555532 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.565924 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.576821 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.584384 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.584413 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.584422 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.584437 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.584448 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.588779 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.601495 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:46Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.641875 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 19:38:59.768958969 +0000 UTC Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.686986 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.687045 4925 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.687065 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.687121 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.687142 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.789533 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.789568 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.789578 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.789592 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.789602 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.892167 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.892220 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.892239 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.892264 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.892282 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.895871 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:46 crc kubenswrapper[4925]: E0202 10:57:46.896120 4925 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:46 crc kubenswrapper[4925]: E0202 10:57:46.896195 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs podName:39f183d5-0612-452e-b762-c841df3a306d nodeName:}" failed. No retries permitted until 2026-02-02 10:57:47.896177348 +0000 UTC m=+44.900426310 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs") pod "network-metrics-daemon-hjf4s" (UID: "39f183d5-0612-452e-b762-c841df3a306d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.994463 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.994531 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.994549 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.994573 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:46 crc kubenswrapper[4925]: I0202 10:57:46.994589 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:46Z","lastTransitionTime":"2026-02-02T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.029829 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" event={"ID":"9551801b-8926-4673-942b-bcd89aa4eb7b","Type":"ContainerStarted","Data":"ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8"} Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.097126 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.097172 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.097219 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.097238 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.097252 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:47Z","lastTransitionTime":"2026-02-02T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.199946 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.200026 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.200044 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.200069 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.200141 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:47Z","lastTransitionTime":"2026-02-02T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.302818 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.302864 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.302875 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.302889 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.302900 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:47Z","lastTransitionTime":"2026-02-02T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.404861 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.404916 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.404927 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.404941 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.404949 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:47Z","lastTransitionTime":"2026-02-02T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.502029 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:58:03.501994052 +0000 UTC m=+60.506243034 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.502291 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.502390 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.502419 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.502450 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") 
pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.502476 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.502543 4925 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.502588 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:58:03.502574667 +0000 UTC m=+60.506823619 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.502601 4925 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.502670 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:58:03.502654019 +0000 UTC m=+60.506903111 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.502676 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.502703 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.502716 4925 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.502752 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:58:03.502741832 +0000 UTC m=+60.506990864 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.503136 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.503156 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.503165 4925 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.503198 4925 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:58:03.503189904 +0000 UTC m=+60.507438856 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.507517 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.507543 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.507554 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.507568 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.507579 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:47Z","lastTransitionTime":"2026-02-02T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.610621 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.610709 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.610735 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.610767 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.610791 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:47Z","lastTransitionTime":"2026-02-02T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.642819 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:17:20.304678341 +0000 UTC Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.664102 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.664217 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.664266 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.664294 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.664290 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.664406 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.664554 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.664629 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.713265 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.713498 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.713506 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.713519 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.713529 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:47Z","lastTransitionTime":"2026-02-02T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.816955 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.816987 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.816997 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.817011 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.817021 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:47Z","lastTransitionTime":"2026-02-02T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.906615 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.906761 4925 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: E0202 10:57:47.906821 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs podName:39f183d5-0612-452e-b762-c841df3a306d nodeName:}" failed. No retries permitted until 2026-02-02 10:57:49.906801836 +0000 UTC m=+46.911050798 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs") pod "network-metrics-daemon-hjf4s" (UID: "39f183d5-0612-452e-b762-c841df3a306d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.920062 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.920136 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.920147 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.920160 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:47 crc kubenswrapper[4925]: I0202 10:57:47.920181 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:47Z","lastTransitionTime":"2026-02-02T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.019651 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.019687 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.019695 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.019707 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.019717 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: E0202 10:57:48.031202 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.039678 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.039722 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.039735 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.039752 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.039768 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.042459 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/0.log" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.045151 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.045269 4925 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.046833 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" event={"ID":"9551801b-8926-4673-942b-bcd89aa4eb7b","Type":"ContainerStarted","Data":"f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa"} Feb 02 10:57:48 crc kubenswrapper[4925]: E0202 10:57:48.051641 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.054785 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.054811 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.054820 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.054834 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.054842 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.057111 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: E0202 10:57:48.069112 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.072316 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.072358 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.072371 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.072388 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.072400 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.076327 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: E0202 10:57:48.085044 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.087762 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.087802 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.087817 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.087833 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.087844 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.089104 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: E0202 10:57:48.098209 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: E0202 10:57:48.098366 4925 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.099640 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.099673 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.099686 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.099701 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.099712 4925 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.104984 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.116023 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.132391 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\":44.655544 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655554 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:57:44.655577 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:57:44.655454 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655778 6219 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:57:44.655686 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656038 6219 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655963 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656010 6219 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-n
etns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\
"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.145269 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed
6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.156137 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.171949 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.182364 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.192135 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.200468 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc 
kubenswrapper[4925]: I0202 10:57:48.201901 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.201938 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.201947 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.201961 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.201969 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.220704 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.233447 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.242406 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.252518 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.263310 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.274981 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.286202 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.301324 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.303858 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.303898 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.303910 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.303926 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.303937 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.319223 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.327823 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc 
kubenswrapper[4925]: I0202 10:57:48.338226 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.352035 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 
10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.364520 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.373744 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.387098 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.398103 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.405921 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.405954 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.405962 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc 
kubenswrapper[4925]: I0202 10:57:48.405974 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.405982 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.418489 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\":44.655544 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655554 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:57:44.655577 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:57:44.655454 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655778 6219 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:57:44.655686 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656038 6219 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655963 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656010 6219 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-n
etns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\
"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.435381 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed
6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.456053 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\
\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.469802 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.480914 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.489946 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.507517 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.507550 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.507559 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.507572 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.507583 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.610179 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.610216 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.610229 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.610243 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.610254 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.643037 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:33:25.029260245 +0000 UTC Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.713590 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.713638 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.713647 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.713659 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.713669 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.816072 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.816165 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.816184 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.816206 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.816221 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.918883 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.919000 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.919020 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.919055 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:48 crc kubenswrapper[4925]: I0202 10:57:48.919102 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:48Z","lastTransitionTime":"2026-02-02T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.022177 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.022256 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.022279 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.022313 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.022333 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.055270 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/1.log" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.056369 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/0.log" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.061474 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d" exitCode=1 Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.061549 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.061668 4925 scope.go:117] "RemoveContainer" containerID="9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.063276 4925 scope.go:117] "RemoveContainer" containerID="acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d" Feb 02 10:57:49 crc kubenswrapper[4925]: E0202 10:57:49.063602 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.085040 4925 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.105976 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.122191 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.125582 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.125646 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.125670 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.125700 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.125722 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.139392 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.155964 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc 
kubenswrapper[4925]: I0202 10:57:49.176868 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.195907 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.213936 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.229703 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.229771 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.229787 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.229811 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.229828 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.233294 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.250700 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.266923 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.287849 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.323601 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5
da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.334895 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.335000 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.335030 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.335063 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.335116 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.346864 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.361320 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.372631 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.397148 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\":44.655544 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655554 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:57:44.655577 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:57:44.655454 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655778 6219 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:57:44.655686 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656038 6219 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655963 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656010 6219 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"p:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:57:48.354301 6413 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 
10:57:48.354897 6413 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0202 10:57:48.354303 6413 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354908 6413 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354919 6413 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0202 10:57:48.354945 6413 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0202 10:57:48.354958 6413 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:49Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.437576 4925 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.437635 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.437648 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.437664 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.437675 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.540035 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.540111 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.540128 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.540149 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.540172 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.641823 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.641875 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.641901 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.641929 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.641953 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.644201 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:22:43.713071068 +0000 UTC Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.664012 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.664026 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.664048 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.664038 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:49 crc kubenswrapper[4925]: E0202 10:57:49.664114 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:49 crc kubenswrapper[4925]: E0202 10:57:49.664253 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:49 crc kubenswrapper[4925]: E0202 10:57:49.664430 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:57:49 crc kubenswrapper[4925]: E0202 10:57:49.664613 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.744291 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.744340 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.744350 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.744372 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.744385 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.847152 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.847195 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.847206 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.847222 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.847232 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.926273 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:49 crc kubenswrapper[4925]: E0202 10:57:49.926666 4925 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:49 crc kubenswrapper[4925]: E0202 10:57:49.926775 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs podName:39f183d5-0612-452e-b762-c841df3a306d nodeName:}" failed. No retries permitted until 2026-02-02 10:57:53.926747381 +0000 UTC m=+50.930996383 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs") pod "network-metrics-daemon-hjf4s" (UID: "39f183d5-0612-452e-b762-c841df3a306d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.949903 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.949953 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.949965 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.949983 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:49 crc kubenswrapper[4925]: I0202 10:57:49.949995 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:49Z","lastTransitionTime":"2026-02-02T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.052741 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.052775 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.052783 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.052795 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.052803 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.066591 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/1.log" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.156490 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.156537 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.156548 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.156566 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.156577 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.258879 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.258934 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.258946 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.258962 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.258974 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.362106 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.362142 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.362155 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.362170 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.362184 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.464927 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.465305 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.465319 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.465334 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.465344 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.568432 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.568481 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.568494 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.568513 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.568525 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.644913 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:26:21.973716003 +0000 UTC Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.672534 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.672596 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.672615 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.672642 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.672660 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.775793 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.775835 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.775844 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.775858 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.775868 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.878787 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.878869 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.878895 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.878926 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.878949 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.981566 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.981626 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.981644 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.981670 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:50 crc kubenswrapper[4925]: I0202 10:57:50.981681 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:50Z","lastTransitionTime":"2026-02-02T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.085784 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.085830 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.085842 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.085860 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.085871 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:51Z","lastTransitionTime":"2026-02-02T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.189235 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.189345 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.189366 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.189392 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.189411 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:51Z","lastTransitionTime":"2026-02-02T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.292912 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.292979 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.292997 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.293022 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.293040 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:51Z","lastTransitionTime":"2026-02-02T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.396510 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.396579 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.396598 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.396624 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.396642 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:51Z","lastTransitionTime":"2026-02-02T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.500044 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.500175 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.500209 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.500239 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.500259 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:51Z","lastTransitionTime":"2026-02-02T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.604429 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.604499 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.604518 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.604543 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.604562 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:51Z","lastTransitionTime":"2026-02-02T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.645189 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:38:24.061792719 +0000 UTC Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.663796 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.663965 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:51 crc kubenswrapper[4925]: E0202 10:57:51.663973 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.663808 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:51 crc kubenswrapper[4925]: E0202 10:57:51.664141 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.664217 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:51 crc kubenswrapper[4925]: E0202 10:57:51.664298 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:51 crc kubenswrapper[4925]: E0202 10:57:51.664354 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.664880 4925 scope.go:117] "RemoveContainer" containerID="3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.709543 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.709588 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.709604 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.709626 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.709655 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:51Z","lastTransitionTime":"2026-02-02T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.812399 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.812454 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.812468 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.812485 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.812496 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:51Z","lastTransitionTime":"2026-02-02T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.915434 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.915497 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.915514 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.915538 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:51 crc kubenswrapper[4925]: I0202 10:57:51.915555 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:51Z","lastTransitionTime":"2026-02-02T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.020362 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.020416 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.020428 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.020444 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.020477 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.081180 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.083640 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.084062 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.102060 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-
cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.114833 4925 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.122786 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.122822 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.122833 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.122848 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.122862 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.129568 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.143816 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.163027 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.175788 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.189206 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.202232 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.216278 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.225714 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.225765 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.225776 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.225792 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.225805 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.236228 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026511aa3813fd5229e6337f5eb25299f300c2e806225361711136873ae00aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"message\\\":\\\":44.655544 6219 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655554 6219 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 10:57:44.655577 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 10:57:44.655454 6219 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655778 6219 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:57:44.655686 6219 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656038 6219 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.655963 6219 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:57:44.656010 6219 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"p:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:57:48.354301 6413 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 
10:57:48.354897 6413 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0202 10:57:48.354303 6413 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354908 6413 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354919 6413 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0202 10:57:48.354945 6413 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0202 10:57:48.354958 6413 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.255660 4925 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"
2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32
923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}
\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.280624 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834
c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.291302 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.302474 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.316732 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc 
kubenswrapper[4925]: I0202 10:57:52.328313 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.328347 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.328356 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.328369 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.328378 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.333998 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.349370 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.430690 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc 
kubenswrapper[4925]: I0202 10:57:52.430735 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.430746 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.430761 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.430772 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.532962 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.533002 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.533012 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.533024 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.533033 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.540523 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.543303 4925 scope.go:117] "RemoveContainer" containerID="acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d" Feb 02 10:57:52 crc kubenswrapper[4925]: E0202 10:57:52.543946 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.566913 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.583008 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.599271 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.626109 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"p:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:57:48.354301 6413 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:57:48.354897 6413 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0202 10:57:48.354303 6413 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354908 6413 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354919 6413 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0202 10:57:48.354945 6413 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0202 10:57:48.354958 6413 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.635289 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.635363 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.635373 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.635386 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.635394 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.645694 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:25:23.576665902 +0000 UTC Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.649599 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819ee
db413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081
849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.678712 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.695205 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.709882 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.725288 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc 
kubenswrapper[4925]: I0202 10:57:52.737899 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.737929 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.737939 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.737953 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.737963 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.743345 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.760194 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.777780 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.794245 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.812630 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.838839 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.840498 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.840538 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.840555 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.840580 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.840597 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.856277 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.870672 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:52Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.942684 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.942725 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.942737 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.942756 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:52 crc kubenswrapper[4925]: I0202 10:57:52.942769 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:52Z","lastTransitionTime":"2026-02-02T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.046373 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.046477 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.046497 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.046520 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.046537 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.150571 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.150625 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.150644 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.150671 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.150689 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.252873 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.252918 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.252928 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.252941 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.252951 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.355884 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.355953 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.355975 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.355999 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.356016 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.458762 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.458794 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.458804 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.458820 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.458829 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.561690 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.561726 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.561737 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.561753 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.561764 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.646714 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:05:59.482798014 +0000 UTC Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.663706 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.663812 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.663879 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.663921 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.664036 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.664059 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.664071 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.664112 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.664131 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: E0202 10:57:53.664525 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:57:53 crc kubenswrapper[4925]: E0202 10:57:53.664528 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:53 crc kubenswrapper[4925]: E0202 10:57:53.664533 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:53 crc kubenswrapper[4925]: E0202 10:57:53.664608 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.767272 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.767326 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.767340 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.767358 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.767371 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.869834 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.869869 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.869878 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.869891 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.869901 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.972500 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.972549 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.972565 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.972586 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.972599 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:53Z","lastTransitionTime":"2026-02-02T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:53 crc kubenswrapper[4925]: I0202 10:57:53.974883 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:53 crc kubenswrapper[4925]: E0202 10:57:53.975058 4925 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:53 crc kubenswrapper[4925]: E0202 10:57:53.975138 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs podName:39f183d5-0612-452e-b762-c841df3a306d nodeName:}" failed. No retries permitted until 2026-02-02 10:58:01.975120809 +0000 UTC m=+58.979369771 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs") pod "network-metrics-daemon-hjf4s" (UID: "39f183d5-0612-452e-b762-c841df3a306d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.075587 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.075666 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.075691 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.075715 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.075733 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:54Z","lastTransitionTime":"2026-02-02T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.178241 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.178326 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.178354 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.178384 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.178407 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:54Z","lastTransitionTime":"2026-02-02T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.285147 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.285181 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.285190 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.285204 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.285214 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:54Z","lastTransitionTime":"2026-02-02T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.387580 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.387655 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.387673 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.388142 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.388243 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:54Z","lastTransitionTime":"2026-02-02T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.491282 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.491331 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.491348 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.491370 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.491386 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:54Z","lastTransitionTime":"2026-02-02T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.593885 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.593948 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.593971 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.594000 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.594021 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:54Z","lastTransitionTime":"2026-02-02T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.647621 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:05:02.900162474 +0000 UTC Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.680216 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.692666 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.696904 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.696932 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.696942 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.696972 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.696982 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:54Z","lastTransitionTime":"2026-02-02T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.704455 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.717421 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.736262 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.751617 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.777711 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.800586 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.800658 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.800682 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.800712 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.800735 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:54Z","lastTransitionTime":"2026-02-02T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.804381 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.819761 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.832536 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.854141 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"p:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:57:48.354301 6413 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:57:48.354897 6413 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0202 10:57:48.354303 6413 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354908 6413 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354919 6413 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0202 10:57:48.354945 6413 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0202 10:57:48.354958 6413 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.869987 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.882988 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.899316 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.903133 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:54 crc 
kubenswrapper[4925]: I0202 10:57:54.903186 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.903199 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.903216 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.903229 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:54Z","lastTransitionTime":"2026-02-02T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.910338 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.922311 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:54 crc kubenswrapper[4925]: I0202 10:57:54.934911 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:54Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:55 crc 
kubenswrapper[4925]: I0202 10:57:55.006124 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.006180 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.006197 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.006222 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.006238 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.109503 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.109584 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.110237 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.110278 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.110303 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.217232 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.217300 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.217323 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.217351 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.217373 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.319739 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.319795 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.319815 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.319877 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.319895 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.422646 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.422722 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.422747 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.422776 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.422797 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.526030 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.526300 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.526334 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.526365 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.526387 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.629751 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.629927 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.629942 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.629958 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.629970 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.648421 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:48:49.514670659 +0000 UTC Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.663577 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.663704 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.663755 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.663581 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:55 crc kubenswrapper[4925]: E0202 10:57:55.664021 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:55 crc kubenswrapper[4925]: E0202 10:57:55.664190 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:55 crc kubenswrapper[4925]: E0202 10:57:55.664287 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:57:55 crc kubenswrapper[4925]: E0202 10:57:55.664360 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.733011 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.733066 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.733111 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.733136 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.733156 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.836264 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.836348 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.836373 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.836401 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.836423 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.940108 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.940154 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.940166 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.940183 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:55 crc kubenswrapper[4925]: I0202 10:57:55.940195 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:55Z","lastTransitionTime":"2026-02-02T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.043605 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.043711 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.043728 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.043753 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.043770 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.146326 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.146601 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.146713 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.146803 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.146883 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.249062 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.249119 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.249131 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.249147 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.249160 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.351646 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.351932 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.352003 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.352092 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.352160 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.454658 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.454967 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.455226 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.455625 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.455832 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.559853 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.559909 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.559929 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.559954 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.559980 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.648896 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:40:03.046475355 +0000 UTC Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.663109 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.663169 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.663180 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.663199 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.663211 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.766449 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.766499 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.766513 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.766532 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.766557 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.869856 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.869938 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.869953 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.869971 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.870027 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.989384 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.989432 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.989441 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.989455 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:56 crc kubenswrapper[4925]: I0202 10:57:56.989466 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:56Z","lastTransitionTime":"2026-02-02T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.092271 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.092323 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.092334 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.092347 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.092358 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:57Z","lastTransitionTime":"2026-02-02T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.196177 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.196228 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.196239 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.196259 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.196272 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:57Z","lastTransitionTime":"2026-02-02T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.299663 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.299748 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.299762 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.299793 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.299803 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:57Z","lastTransitionTime":"2026-02-02T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.401922 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.401950 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.401960 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.401972 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.401981 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:57Z","lastTransitionTime":"2026-02-02T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.505058 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.505171 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.505190 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.505214 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.505231 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:57Z","lastTransitionTime":"2026-02-02T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.607991 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.608054 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.608071 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.608133 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.608149 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:57Z","lastTransitionTime":"2026-02-02T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.650141 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:28:33.217306392 +0000 UTC Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.663749 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.663811 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.663756 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:57 crc kubenswrapper[4925]: E0202 10:57:57.664152 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.664186 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:57 crc kubenswrapper[4925]: E0202 10:57:57.664333 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:57 crc kubenswrapper[4925]: E0202 10:57:57.664447 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:57 crc kubenswrapper[4925]: E0202 10:57:57.664530 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.710914 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.710964 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.710980 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.711005 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.711022 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:57Z","lastTransitionTime":"2026-02-02T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.814893 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.814940 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.814950 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.814964 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.814973 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:57Z","lastTransitionTime":"2026-02-02T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.918375 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.918411 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.918423 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.918437 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:57 crc kubenswrapper[4925]: I0202 10:57:57.918449 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:57Z","lastTransitionTime":"2026-02-02T10:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.021750 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.021971 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.021986 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.022005 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.022019 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.125048 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.125152 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.125176 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.125201 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.125217 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.200726 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.200776 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.200794 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.200816 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.200832 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: E0202 10:57:58.219950 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.225488 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.225559 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.225583 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.225613 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.225643 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: E0202 10:57:58.245413 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.249308 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.249359 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.249371 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.249387 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.249400 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: E0202 10:57:58.266732 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.270324 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.270349 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.270360 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.270377 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.270388 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: E0202 10:57:58.287528 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.291330 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.291385 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.291396 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.291411 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.291422 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: E0202 10:57:58.305216 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:57:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:57:58 crc kubenswrapper[4925]: E0202 10:57:58.305461 4925 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.307017 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.307050 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.307119 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.307140 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.307153 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.413803 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.413960 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.413989 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.414024 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.414058 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.517766 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.517813 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.517826 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.517845 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.517858 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.620179 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.620238 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.620259 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.620283 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.620303 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.650857 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:12:11.415210632 +0000 UTC Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.722452 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.722513 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.722532 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.722559 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.722579 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.825601 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.825629 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.825638 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.825651 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.825659 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.928097 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.928164 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.928183 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.928607 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:58 crc kubenswrapper[4925]: I0202 10:57:58.928660 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:58Z","lastTransitionTime":"2026-02-02T10:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.032867 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.032904 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.032913 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.032926 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.032937 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.135961 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.136018 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.136036 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.136060 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.136109 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.240239 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.240288 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.240305 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.240327 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.240344 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.343749 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.343807 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.343826 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.343850 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.343868 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.447141 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.447211 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.447238 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.447265 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.447286 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.550195 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.550265 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.550285 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.550311 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.550332 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.651302 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:04:19.324126025 +0000 UTC Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.652804 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.652874 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.652893 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.652921 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.652939 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.663290 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.663300 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.663353 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.663457 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:57:59 crc kubenswrapper[4925]: E0202 10:57:59.663606 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:57:59 crc kubenswrapper[4925]: E0202 10:57:59.663750 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:57:59 crc kubenswrapper[4925]: E0202 10:57:59.663980 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:57:59 crc kubenswrapper[4925]: E0202 10:57:59.664100 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.756634 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.756725 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.756747 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.756776 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.756867 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.861252 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.861332 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.861358 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.861390 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.861414 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.964284 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.964347 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.964369 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.964397 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:57:59 crc kubenswrapper[4925]: I0202 10:57:59.964416 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:57:59Z","lastTransitionTime":"2026-02-02T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.067536 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.067597 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.067620 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.067648 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.067666 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.170148 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.170182 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.170191 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.170202 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.170211 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.272372 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.272746 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.272895 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.273053 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.273233 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.376360 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.376735 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.376872 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.377004 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.377230 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.480652 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.480943 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.481067 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.481298 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.481477 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.584646 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.585223 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.585305 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.585329 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.585346 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.652663 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:43:08.153328753 +0000 UTC Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.687152 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.687449 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.687587 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.687710 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.687853 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.791974 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.792030 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.792051 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.792110 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.792133 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.896014 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.896171 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.896212 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.896243 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.896269 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.999532 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.999565 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.999574 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.999588 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:00 crc kubenswrapper[4925]: I0202 10:58:00.999598 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:00Z","lastTransitionTime":"2026-02-02T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.101928 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.101989 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.102009 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.102032 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.102049 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:01Z","lastTransitionTime":"2026-02-02T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.204516 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.204563 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.204575 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.204596 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.204607 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:01Z","lastTransitionTime":"2026-02-02T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.308107 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.308158 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.308174 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.308193 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.308203 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:01Z","lastTransitionTime":"2026-02-02T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.411181 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.411249 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.411270 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.411295 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.411311 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:01Z","lastTransitionTime":"2026-02-02T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.513307 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.513345 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.513357 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.513372 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.513383 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:01Z","lastTransitionTime":"2026-02-02T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.616984 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.617114 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.617133 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.617158 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.617177 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:01Z","lastTransitionTime":"2026-02-02T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.653518 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:05:09.428638241 +0000 UTC Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.664098 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.664098 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.664160 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.664174 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:01 crc kubenswrapper[4925]: E0202 10:58:01.664323 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:01 crc kubenswrapper[4925]: E0202 10:58:01.664442 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:01 crc kubenswrapper[4925]: E0202 10:58:01.664531 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:01 crc kubenswrapper[4925]: E0202 10:58:01.664615 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.719066 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.719121 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.719132 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.719146 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.719156 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:01Z","lastTransitionTime":"2026-02-02T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.821565 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.821598 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.821606 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.821620 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.821631 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:01Z","lastTransitionTime":"2026-02-02T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.924131 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.924457 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.924536 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.924611 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:01 crc kubenswrapper[4925]: I0202 10:58:01.924680 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:01Z","lastTransitionTime":"2026-02-02T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.027719 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.027814 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.027826 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.027881 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.027895 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.057575 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:02 crc kubenswrapper[4925]: E0202 10:58:02.057710 4925 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:58:02 crc kubenswrapper[4925]: E0202 10:58:02.057768 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs podName:39f183d5-0612-452e-b762-c841df3a306d nodeName:}" failed. No retries permitted until 2026-02-02 10:58:18.057753172 +0000 UTC m=+75.062002134 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs") pod "network-metrics-daemon-hjf4s" (UID: "39f183d5-0612-452e-b762-c841df3a306d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.130729 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.130804 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.130825 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.130850 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.130869 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.233361 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.233409 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.233425 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.233446 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.233459 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.335953 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.335991 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.336003 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.336020 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.336033 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.438462 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.438508 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.438518 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.438532 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.438543 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.542976 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.543046 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.543071 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.543152 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.543175 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.631815 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.646821 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.648923 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.648982 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.649005 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.649034 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.649051 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.654350 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:46:07.162551979 +0000 UTC Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.665367 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da
07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.686768 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.697557 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.707007 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.728662 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"p:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:57:48.354301 6413 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:57:48.354897 6413 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0202 10:57:48.354303 6413 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354908 6413 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354919 6413 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0202 10:57:48.354945 6413 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0202 10:57:48.354958 6413 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.744908 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.751618 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.751649 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.751659 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.751673 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.751683 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.759648 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.773601 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.783010 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.794225 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.804740 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc 
kubenswrapper[4925]: I0202 10:58:02.824219 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.843308 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.854243 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.854285 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.854300 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.854320 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.854334 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.857801 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.875048 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.888526 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.900701 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.957584 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.957642 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.957664 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:02 crc 
kubenswrapper[4925]: I0202 10:58:02.957693 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:02 crc kubenswrapper[4925]: I0202 10:58:02.957713 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:02Z","lastTransitionTime":"2026-02-02T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.061396 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.061461 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.061489 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.061518 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.061538 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.164018 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.164438 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.164583 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.164717 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.165016 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.268974 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.269022 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.269043 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.269071 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.269125 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.373059 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.373205 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.373223 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.373247 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.373265 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.476641 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.476977 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.477203 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.477409 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.477583 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.576181 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.576434 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:58:35.576399469 +0000 UTC m=+92.580648461 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.576509 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.576555 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.576602 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.576644 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.576730 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.576762 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.576779 4925 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.576783 4925 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.576802 4925 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.576848 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:58:35.576833051 +0000 UTC m=+92.581082053 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.576874 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:58:35.576861452 +0000 UTC m=+92.581110444 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.576942 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:58:35.576906523 +0000 UTC m=+92.581155575 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.577115 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.577164 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.577189 4925 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.577300 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:58:35.577272873 +0000 UTC m=+92.581521915 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.581381 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.581443 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.581469 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.581500 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.581523 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.655148 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:46:57.297849793 +0000 UTC Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.663610 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.663707 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.663831 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.663613 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.663874 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.664007 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.664191 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:03 crc kubenswrapper[4925]: E0202 10:58:03.664329 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.665519 4925 scope.go:117] "RemoveContainer" containerID="acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.685003 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.685068 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.685122 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.685154 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.685177 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.789232 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.789602 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.789621 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.789647 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.789666 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.892152 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.892185 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.892197 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.892213 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.892224 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.995400 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.995462 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.995481 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.995505 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:03 crc kubenswrapper[4925]: I0202 10:58:03.995522 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:03Z","lastTransitionTime":"2026-02-02T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.097455 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.097514 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.097527 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.097541 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.097550 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:04Z","lastTransitionTime":"2026-02-02T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.140040 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/1.log" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.143479 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.145095 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.160632 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64
b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.240564 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"p:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:57:48.354301 6413 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:57:48.354897 6413 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0202 10:57:48.354303 6413 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354908 6413 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354919 6413 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0202 10:57:48.354945 6413 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0202 10:57:48.354958 6413 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.241990 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.242023 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.242035 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.242052 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.242063 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:04Z","lastTransitionTime":"2026-02-02T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.261538 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.283531 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.299716 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.311237 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.322495 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc 
kubenswrapper[4925]: I0202 10:58:04.334441 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.343987 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.344021 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.344030 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.344042 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.344053 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:04Z","lastTransitionTime":"2026-02-02T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.345784 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.354997 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.366216 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.381120 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.393906 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.404332 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.419208 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.430825 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.441912 4925 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.445628 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.445667 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.445678 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.445694 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.445706 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:04Z","lastTransitionTime":"2026-02-02T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.451503 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.548376 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.548427 4925 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.548440 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.548457 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.548468 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:04Z","lastTransitionTime":"2026-02-02T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.650334 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.650364 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.650373 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.650386 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.650394 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:04Z","lastTransitionTime":"2026-02-02T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.655396 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:16:35.9083893 +0000 UTC Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.681423 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.693214 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.701856 4925 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.711797 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.723347 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.733576 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.743295 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.752825 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.752856 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.752867 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:04 crc 
kubenswrapper[4925]: I0202 10:58:04.752881 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.752893 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:04Z","lastTransitionTime":"2026-02-02T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.761055 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.774267 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.785162 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.793588 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.810063 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"p:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:57:48.354301 6413 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:57:48.354897 6413 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0202 10:57:48.354303 6413 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354908 6413 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354919 6413 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0202 10:57:48.354945 6413 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0202 10:57:48.354958 6413 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.827735 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.840024 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.850890 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.855424 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.855483 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.855498 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.855515 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.855527 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:04Z","lastTransitionTime":"2026-02-02T10:58:04Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.861588 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc 
kubenswrapper[4925]: I0202 10:58:04.871860 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc 
kubenswrapper[4925]: I0202 10:58:04.887291 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.958155 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.958209 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.958226 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.958248 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:04 crc kubenswrapper[4925]: I0202 10:58:04.958265 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:04Z","lastTransitionTime":"2026-02-02T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.060733 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.060814 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.060840 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.060871 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.060895 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.149904 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/2.log" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.150934 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/1.log" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.155589 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20" exitCode=1 Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.155655 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.155731 4925 scope.go:117] "RemoveContainer" containerID="acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.156703 4925 scope.go:117] "RemoveContainer" containerID="5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20" Feb 02 10:58:05 crc kubenswrapper[4925]: E0202 10:58:05.156987 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.165942 4925 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.166024 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.166041 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.166063 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.166107 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.175971 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.198036 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.226982 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5
da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.246202 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.264097 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.269162 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.269236 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.269272 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.269289 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.269301 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.277162 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.299273 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acc093547ef80b595e79373e2e743e34c0ba9e114de78fb48e420c679642f31d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"message\\\":\\\"p:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:57:48.354301 6413 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 10:57:48.354897 6413 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0202 10:57:48.354303 6413 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354908 6413 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0202 10:57:48.354919 6413 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0202 10:57:48.354945 6413 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0202 10:57:48.354958 6413 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:04Z\\\",\\\"message\\\":\\\"e.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 
service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0065992f7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 10:58:04.643008 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438
466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.316281 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.336370 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.350259 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.363827 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.372364 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.372437 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.372452 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.372469 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.372480 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.375095 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc 
kubenswrapper[4925]: I0202 10:58:05.386717 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.400746 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.413366 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.425997 4925 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.441998 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.460199 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.474867 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.474920 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.474933 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.474953 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.474966 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.577742 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.577799 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.577815 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.577836 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.577850 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.655877 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:09:06.166776401 +0000 UTC Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.664252 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.664257 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:05 crc kubenswrapper[4925]: E0202 10:58:05.664394 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.664452 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.664491 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:05 crc kubenswrapper[4925]: E0202 10:58:05.664689 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:05 crc kubenswrapper[4925]: E0202 10:58:05.664797 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:05 crc kubenswrapper[4925]: E0202 10:58:05.664840 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.680672 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.680709 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.680720 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.680736 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.680746 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.782656 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.782697 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.782707 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.782720 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.782729 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.884960 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.884998 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.885007 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.885021 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.885030 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.987804 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.987850 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.987865 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.987880 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:05 crc kubenswrapper[4925]: I0202 10:58:05.987891 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:05Z","lastTransitionTime":"2026-02-02T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.090845 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.090945 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.090964 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.090999 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.091018 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:06Z","lastTransitionTime":"2026-02-02T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.161931 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/2.log" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.166466 4925 scope.go:117] "RemoveContainer" containerID="5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20" Feb 02 10:58:06 crc kubenswrapper[4925]: E0202 10:58:06.166744 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.181901 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.193266 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.193303 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.193315 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 
10:58:06.193354 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.193369 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:06Z","lastTransitionTime":"2026-02-02T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.199978 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.212800 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc08
6a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.225893 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.235510 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc 
kubenswrapper[4925]: I0202 10:58:06.244826 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.257758 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.272682 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.284846 4925 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.296213 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.296256 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.296270 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.296291 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.296304 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:06Z","lastTransitionTime":"2026-02-02T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.298924 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.316173 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.330135 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.355893 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.368151 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.381288 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.392396 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.399057 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.399119 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.399138 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.399161 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.399178 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:06Z","lastTransitionTime":"2026-02-02T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.414352 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:04Z\\\",\\\"message\\\":\\\"e.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0065992f7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 10:58:04.643008 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.414832 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.430162 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.442348 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: 
I0202 10:58:06.454298 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.466287 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.477588 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.498827 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:04Z\\\",\\\"message\\\":\\\"e.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0065992f7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 10:58:04.643008 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.501626 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.501713 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.501734 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.501769 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.501793 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:06Z","lastTransitionTime":"2026-02-02T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.515890 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.540322 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.552486 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.565835 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.580730 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc 
kubenswrapper[4925]: I0202 10:58:06.599364 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.604273 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.604348 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.604368 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.604397 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.604419 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:06Z","lastTransitionTime":"2026-02-02T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.612858 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.629525 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.646815 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.656820 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:34:58.027575167 +0000 UTC Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.661219 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.674233 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.687131 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.707560 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.707616 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.707628 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.707647 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.707658 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:06Z","lastTransitionTime":"2026-02-02T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.713373 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.812797 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.812895 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.812921 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.812945 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.812960 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:06Z","lastTransitionTime":"2026-02-02T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.915290 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.915360 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.915380 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.915411 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:06 crc kubenswrapper[4925]: I0202 10:58:06.915432 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:06Z","lastTransitionTime":"2026-02-02T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.018578 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.018636 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.018647 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.018665 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.018677 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.122115 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.122176 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.122194 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.122222 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.122240 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.225299 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.225359 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.225370 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.225389 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.225402 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.327952 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.327994 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.328004 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.328018 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.328028 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.430294 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.430332 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.430341 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.430355 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.430364 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.533364 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.533417 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.533434 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.533457 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.533472 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.636819 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.636856 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.636867 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.636883 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.636894 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.656968 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:30:27.072234823 +0000 UTC Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.663429 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:07 crc kubenswrapper[4925]: E0202 10:58:07.663551 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.663619 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.663635 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.663615 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:07 crc kubenswrapper[4925]: E0202 10:58:07.663752 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:07 crc kubenswrapper[4925]: E0202 10:58:07.664108 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:07 crc kubenswrapper[4925]: E0202 10:58:07.664310 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.740325 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.740408 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.740429 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.740461 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.740483 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.843225 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.843297 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.843316 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.843374 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.843415 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.946137 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.946209 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.946236 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.946266 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:07 crc kubenswrapper[4925]: I0202 10:58:07.946288 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:07Z","lastTransitionTime":"2026-02-02T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.049281 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.049343 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.049361 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.049387 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.049410 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.151997 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.152071 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.152137 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.152169 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.152197 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.255512 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.255563 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.255579 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.255607 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.255623 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.359135 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.359192 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.359210 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.359234 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.359254 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.461709 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.461744 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.461782 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.461799 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.461810 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.547721 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.547765 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.547778 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.547796 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.547807 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: E0202 10:58:08.563107 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.567244 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.567292 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.567310 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.567329 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.567343 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: E0202 10:58:08.585120 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.589125 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.589171 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.589185 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.589204 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.589219 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: E0202 10:58:08.601852 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.605440 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.605484 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.605496 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.605514 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.605525 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: E0202 10:58:08.618278 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.622260 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.622289 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.622297 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.622309 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.622318 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: E0202 10:58:08.634326 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:08 crc kubenswrapper[4925]: E0202 10:58:08.634476 4925 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.635975 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.636005 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.636013 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.636023 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.636033 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.657626 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:00:04.358634002 +0000 UTC Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.738215 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.738248 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.738260 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.738274 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.738287 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.841352 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.841404 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.841421 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.841441 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.841453 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.943793 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.943833 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.943842 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.943856 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:08 crc kubenswrapper[4925]: I0202 10:58:08.943866 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:08Z","lastTransitionTime":"2026-02-02T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.046755 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.046795 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.046807 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.046823 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.046836 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.149206 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.149239 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.149250 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.149265 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.149279 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.251233 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.251320 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.251329 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.251342 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.251351 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.353643 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.353677 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.353688 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.353704 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.353715 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.456678 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.456754 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.456773 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.456795 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.456809 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.559738 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.559774 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.559786 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.559801 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.559811 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.658303 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:35:57.525962825 +0000 UTC Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.662389 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.662453 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.662476 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.662506 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.662528 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.663625 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.663666 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.663640 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.663623 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:09 crc kubenswrapper[4925]: E0202 10:58:09.663745 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:09 crc kubenswrapper[4925]: E0202 10:58:09.663932 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:09 crc kubenswrapper[4925]: E0202 10:58:09.663958 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:09 crc kubenswrapper[4925]: E0202 10:58:09.664152 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.765698 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.765771 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.765789 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.766270 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.766333 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.869064 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.869180 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.869198 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.869221 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.869238 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.971611 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.971658 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.971672 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.971691 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:09 crc kubenswrapper[4925]: I0202 10:58:09.971702 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:09Z","lastTransitionTime":"2026-02-02T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.074308 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.074383 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.074419 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.074442 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.074459 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.176535 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.176569 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.176578 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.176592 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.176601 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.279011 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.279105 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.279130 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.279159 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.279179 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.381635 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.381701 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.381723 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.381752 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.381773 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.484955 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.485031 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.485060 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.485146 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.485168 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.587334 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.587399 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.587414 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.587431 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.587443 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.659461 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:23:49.222192429 +0000 UTC Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.689882 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.689921 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.689932 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.689955 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.689968 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.791860 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.791917 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.791929 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.791945 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.791975 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.893873 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.893911 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.893921 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.893936 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.893947 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.995701 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.995738 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.995749 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.995763 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:10 crc kubenswrapper[4925]: I0202 10:58:10.995774 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:10Z","lastTransitionTime":"2026-02-02T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.098257 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.098293 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.098302 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.098317 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.098327 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:11Z","lastTransitionTime":"2026-02-02T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.200877 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.200942 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.200963 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.200990 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.201011 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:11Z","lastTransitionTime":"2026-02-02T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.303010 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.303045 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.303056 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.303089 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.303103 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:11Z","lastTransitionTime":"2026-02-02T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.405462 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.405505 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.405514 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.405529 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.405539 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:11Z","lastTransitionTime":"2026-02-02T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.508005 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.508046 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.508062 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.508090 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.508100 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:11Z","lastTransitionTime":"2026-02-02T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.610053 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.610112 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.610122 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.610136 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.610145 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:11Z","lastTransitionTime":"2026-02-02T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.660467 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:11:25.65919475 +0000 UTC Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.663834 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.663878 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.663901 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.663852 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:11 crc kubenswrapper[4925]: E0202 10:58:11.663995 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:11 crc kubenswrapper[4925]: E0202 10:58:11.664178 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:11 crc kubenswrapper[4925]: E0202 10:58:11.664318 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:11 crc kubenswrapper[4925]: E0202 10:58:11.664410 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.712525 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.712569 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.712581 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.712599 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.712611 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:11Z","lastTransitionTime":"2026-02-02T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.815162 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.815452 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.815472 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.815513 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.815529 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:11Z","lastTransitionTime":"2026-02-02T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.917733 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.917771 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.917781 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.917809 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:11 crc kubenswrapper[4925]: I0202 10:58:11.917830 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:11Z","lastTransitionTime":"2026-02-02T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.019954 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.019999 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.020011 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.020027 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.020038 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.122890 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.122946 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.122962 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.122986 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.123005 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.225706 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.225739 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.225750 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.225765 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.225775 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.328821 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.328863 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.328873 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.328888 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.328898 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.431777 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.431835 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.431853 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.431914 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.431945 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.535093 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.535127 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.535138 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.535152 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.535160 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.637906 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.637934 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.637942 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.637956 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.637964 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.660983 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:40:47.30554236 +0000 UTC Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.740273 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.740323 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.740333 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.740347 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.740357 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.843215 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.843278 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.843298 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.843321 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.843338 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.946154 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.946194 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.946208 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.946224 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:12 crc kubenswrapper[4925]: I0202 10:58:12.946236 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:12Z","lastTransitionTime":"2026-02-02T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.049161 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.049193 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.049203 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.049215 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.049224 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.151198 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.151260 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.151282 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.151305 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.151321 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.253320 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.253355 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.253366 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.253379 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.253390 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.355903 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.355949 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.355961 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.355976 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.355986 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.458955 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.458989 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.458998 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.459010 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.459019 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.561178 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.561224 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.561238 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.561259 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.561277 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.661483 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:18:12.443211873 +0000 UTC Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.662827 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.662863 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.662874 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.662887 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.662896 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.663752 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.663798 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.663807 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.663754 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:13 crc kubenswrapper[4925]: E0202 10:58:13.663884 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:13 crc kubenswrapper[4925]: E0202 10:58:13.663832 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:13 crc kubenswrapper[4925]: E0202 10:58:13.663975 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:13 crc kubenswrapper[4925]: E0202 10:58:13.664038 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.765372 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.765412 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.765423 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.765438 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.765447 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.867766 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.867806 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.867815 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.867828 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.867839 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.970321 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.970389 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.970408 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.970431 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:13 crc kubenswrapper[4925]: I0202 10:58:13.970449 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:13Z","lastTransitionTime":"2026-02-02T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.073209 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.073256 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.073271 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.073288 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.073301 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.175520 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.175569 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.175587 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.175610 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.175629 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.278291 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.278333 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.278343 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.278361 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.278373 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.381153 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.381200 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.381217 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.381234 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.381245 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.483465 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.483506 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.483516 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.483530 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.483541 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.587648 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.587694 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.587708 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.587730 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.587742 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.662055 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:17:56.857361151 +0000 UTC Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.676579 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.687880 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.689402 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.689428 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.689437 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.689450 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.689459 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.699718 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.710049 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.720718 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.730001 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.738385 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.753338 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.763758 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.773141 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.780749 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.791740 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.791778 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.791787 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.791802 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.791813 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.796613 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:04Z\\\",\\\"message\\\":\\\"e.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0065992f7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 10:58:04.643008 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.808662 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.818160 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.826022 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.835506 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.843671 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc 
kubenswrapper[4925]: I0202 10:58:14.854302 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.893891 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.893940 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.893958 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.893981 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.893998 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.996489 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.996541 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.996560 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.996582 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:14 crc kubenswrapper[4925]: I0202 10:58:14.996598 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:14Z","lastTransitionTime":"2026-02-02T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.098936 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.098971 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.098980 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.098994 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.099003 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:15Z","lastTransitionTime":"2026-02-02T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.200687 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.200735 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.200747 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.200763 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.200776 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:15Z","lastTransitionTime":"2026-02-02T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.303119 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.303160 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.303169 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.303186 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.303195 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:15Z","lastTransitionTime":"2026-02-02T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.405490 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.406024 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.406128 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.406216 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.406293 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:15Z","lastTransitionTime":"2026-02-02T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.508824 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.508865 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.508878 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.508894 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.508905 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:15Z","lastTransitionTime":"2026-02-02T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.611399 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.611437 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.611450 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.611465 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.611476 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:15Z","lastTransitionTime":"2026-02-02T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.662886 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:31:35.921227316 +0000 UTC Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.663595 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:15 crc kubenswrapper[4925]: E0202 10:58:15.663776 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.664255 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:15 crc kubenswrapper[4925]: E0202 10:58:15.664337 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.664371 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:15 crc kubenswrapper[4925]: E0202 10:58:15.664452 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.664540 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:15 crc kubenswrapper[4925]: E0202 10:58:15.664676 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.713386 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.713552 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.713639 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.713723 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.713808 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:15Z","lastTransitionTime":"2026-02-02T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.815971 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.816237 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.816305 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.816374 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.816436 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:15Z","lastTransitionTime":"2026-02-02T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.918247 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.918299 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.918309 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.918320 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:15 crc kubenswrapper[4925]: I0202 10:58:15.918329 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:15Z","lastTransitionTime":"2026-02-02T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.020909 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.021206 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.021290 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.021363 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.021423 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.123508 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.123565 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.123582 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.123661 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.123682 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.226610 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.226651 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.226660 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.226672 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.226681 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.328851 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.328899 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.328909 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.328923 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.328932 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.432144 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.432197 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.432215 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.432237 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.432253 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.534461 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.534541 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.534562 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.534598 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.534624 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.636647 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.636686 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.636697 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.636714 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.636724 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.663678 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:45:51.675837219 +0000 UTC Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.739356 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.739391 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.739400 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.739414 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.739422 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.841232 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.841254 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.841262 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.841275 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.841284 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.942992 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.943030 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.943039 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.943054 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:16 crc kubenswrapper[4925]: I0202 10:58:16.943063 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:16Z","lastTransitionTime":"2026-02-02T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.045513 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.045551 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.045562 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.045578 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.045589 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.148106 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.148160 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.148173 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.148190 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.148200 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.251101 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.251136 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.251146 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.251161 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.251171 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.353989 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.354031 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.354040 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.354053 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.354062 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.456059 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.456105 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.456116 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.456129 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.456137 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.559016 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.559127 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.559153 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.559181 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.559204 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.661588 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.661623 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.661633 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.661646 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.661655 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.663927 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:56:39.345157871 +0000 UTC Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.664044 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.664142 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:17 crc kubenswrapper[4925]: E0202 10:58:17.664176 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.664218 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.664255 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:17 crc kubenswrapper[4925]: E0202 10:58:17.664341 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:17 crc kubenswrapper[4925]: E0202 10:58:17.664390 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:17 crc kubenswrapper[4925]: E0202 10:58:17.664458 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.764166 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.764407 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.764416 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.764458 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.764466 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.867439 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.867478 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.867488 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.867501 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.867511 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.972319 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.972366 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.972376 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.972392 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:17 crc kubenswrapper[4925]: I0202 10:58:17.972401 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:17Z","lastTransitionTime":"2026-02-02T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.074611 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.074655 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.074666 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.074684 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.074697 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.125256 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:18 crc kubenswrapper[4925]: E0202 10:58:18.125561 4925 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:58:18 crc kubenswrapper[4925]: E0202 10:58:18.125710 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs podName:39f183d5-0612-452e-b762-c841df3a306d nodeName:}" failed. No retries permitted until 2026-02-02 10:58:50.125663916 +0000 UTC m=+107.129912918 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs") pod "network-metrics-daemon-hjf4s" (UID: "39f183d5-0612-452e-b762-c841df3a306d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.176619 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.176681 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.176692 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.176707 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.176717 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.280026 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.280071 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.280118 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.280141 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.280158 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.382448 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.382488 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.382498 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.382514 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.382525 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.485518 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.485565 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.485575 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.485594 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.485603 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.588612 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.588669 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.588687 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.588712 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.588730 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.664703 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:50:36.504195858 +0000 UTC Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.665301 4925 scope.go:117] "RemoveContainer" containerID="5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20" Feb 02 10:58:18 crc kubenswrapper[4925]: E0202 10:58:18.665652 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.691006 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.691062 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.691128 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.691159 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.691181 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.794226 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.794303 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.794336 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.794367 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.794387 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.896959 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.897002 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.897012 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.897025 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.897034 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.951951 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.952130 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.952157 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.952187 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.952214 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: E0202 10:58:18.968701 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:18Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.973047 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.973151 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.973176 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.973210 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.973243 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:18 crc kubenswrapper[4925]: E0202 10:58:18.990484 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:18Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.993970 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.994021 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.994036 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.994055 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:18 crc kubenswrapper[4925]: I0202 10:58:18.994069 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:18Z","lastTransitionTime":"2026-02-02T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: E0202 10:58:19.010250 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.013634 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.013716 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.013738 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.013834 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.013914 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: E0202 10:58:19.026329 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.030312 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.030347 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.030357 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.030371 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.030383 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: E0202 10:58:19.046850 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:19 crc kubenswrapper[4925]: E0202 10:58:19.046970 4925 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.048452 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.048483 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.048493 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.048507 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.048518 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.151661 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.151706 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.151719 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.151736 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.151748 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.253800 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.253859 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.253874 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.253893 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.253905 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.356377 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.356418 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.356433 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.356448 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.356460 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.458804 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.458837 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.458847 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.458860 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.458872 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.561928 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.561993 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.562011 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.562036 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.562056 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.663393 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.663416 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.663473 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.663649 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.664191 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.664219 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.664231 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.664248 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.664259 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: E0202 10:58:19.664440 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:19 crc kubenswrapper[4925]: E0202 10:58:19.664589 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:19 crc kubenswrapper[4925]: E0202 10:58:19.664707 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.664992 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:42:56.989691624 +0000 UTC Feb 02 10:58:19 crc kubenswrapper[4925]: E0202 10:58:19.663959 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.767332 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.767724 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.767907 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.768060 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.768243 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.871245 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.871587 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.871729 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.871862 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.871993 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.974875 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.975278 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.975460 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.975606 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:19 crc kubenswrapper[4925]: I0202 10:58:19.975731 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:19Z","lastTransitionTime":"2026-02-02T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.079256 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.079327 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.079344 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.079370 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.079387 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:20Z","lastTransitionTime":"2026-02-02T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.183926 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.184314 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.184412 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.184486 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.184552 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:20Z","lastTransitionTime":"2026-02-02T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.288058 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.288145 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.288202 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.288227 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.288244 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:20Z","lastTransitionTime":"2026-02-02T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.391267 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.391323 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.391341 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.391364 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.391382 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:20Z","lastTransitionTime":"2026-02-02T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.494218 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.494265 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.494293 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.494307 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.494316 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:20Z","lastTransitionTime":"2026-02-02T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.598787 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.598846 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.598863 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.598887 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.598905 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:20Z","lastTransitionTime":"2026-02-02T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.665978 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:11:14.133993308 +0000 UTC Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.702431 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.702491 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.702513 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.702539 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.702563 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:20Z","lastTransitionTime":"2026-02-02T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.807044 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.807263 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.807294 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.807320 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.807338 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:20Z","lastTransitionTime":"2026-02-02T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.911025 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.911421 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.911618 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.911978 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:20 crc kubenswrapper[4925]: I0202 10:58:20.912195 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:20Z","lastTransitionTime":"2026-02-02T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.016146 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.016482 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.016619 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.019703 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.019879 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.122576 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.122994 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.123224 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.123384 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.123511 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.226570 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.226658 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.226682 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.226712 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.226734 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.330354 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.330437 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.330461 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.330489 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.330512 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.434532 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.434600 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.434624 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.434655 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.434677 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.537547 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.537636 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.537655 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.537679 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.537698 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.640993 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.641139 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.641160 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.641183 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.641200 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.664026 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.664144 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.664068 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.664028 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:21 crc kubenswrapper[4925]: E0202 10:58:21.664319 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:21 crc kubenswrapper[4925]: E0202 10:58:21.664489 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:21 crc kubenswrapper[4925]: E0202 10:58:21.664583 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:21 crc kubenswrapper[4925]: E0202 10:58:21.664727 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.666377 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:58:06.364216788 +0000 UTC Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.744572 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.744654 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.744674 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.744700 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.744718 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.847540 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.847630 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.847671 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.847704 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.847726 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.951410 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.951478 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.951499 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.951523 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:21 crc kubenswrapper[4925]: I0202 10:58:21.951540 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:21Z","lastTransitionTime":"2026-02-02T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.054403 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.054468 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.054487 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.054512 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.054530 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.157338 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.157382 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.157394 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.157413 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.157438 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.261658 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.262539 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.262622 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.262639 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.262651 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.365425 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.365492 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.365511 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.365535 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.365553 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.469054 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.469162 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.469175 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.469192 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.469206 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.572929 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.572975 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.572987 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.573004 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.573017 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.666564 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:55:23.039220506 +0000 UTC Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.675008 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.675107 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.675123 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.675165 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.675181 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.778348 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.778400 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.778411 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.778425 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.778450 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.881364 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.881436 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.881453 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.881477 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.881495 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.984251 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.984327 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.984347 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.984366 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:22 crc kubenswrapper[4925]: I0202 10:58:22.984382 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:22Z","lastTransitionTime":"2026-02-02T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.087764 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.087851 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.087869 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.087886 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.087898 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:23Z","lastTransitionTime":"2026-02-02T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.191347 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.191387 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.191400 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.191417 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.191428 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:23Z","lastTransitionTime":"2026-02-02T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.220938 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/0.log" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.220994 4925 generic.go:334] "Generic (PLEG): container finished" podID="b84c6881-f719-456f-9135-7dfb7688a48d" containerID="3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e" exitCode=1 Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.221026 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4rr9" event={"ID":"b84c6881-f719-456f-9135-7dfb7688a48d","Type":"ContainerDied","Data":"3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.221463 4925 scope.go:117] "RemoveContainer" containerID="3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.261886 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.276363 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc 
kubenswrapper[4925]: I0202 10:58:23.294375 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.294414 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.294424 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.294439 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.294449 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:23Z","lastTransitionTime":"2026-02-02T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.296467 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.314451 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:23Z\\\",\\\"message\\\":\\\"2026-02-02T10:57:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6\\\\n2026-02-02T10:57:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6 to /host/opt/cni/bin/\\\\n2026-02-02T10:57:38Z [verbose] multus-daemon started\\\\n2026-02-02T10:57:38Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:58:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.327226 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.342434 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.355920 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.372098 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.389367 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.397164 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.397226 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.397250 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.397281 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.397303 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:23Z","lastTransitionTime":"2026-02-02T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.409955 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.431696 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.452496 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.470355 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.487340 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.500153 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.500181 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.500193 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.500209 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.500221 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:23Z","lastTransitionTime":"2026-02-02T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.517255 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:04Z\\\",\\\"message\\\":\\\"e.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0065992f7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 10:58:04.643008 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.540265 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec
9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.584204 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5
da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.611943 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.612006 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.612023 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.612050 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.612062 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:23Z","lastTransitionTime":"2026-02-02T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.626382 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.664147 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.664345 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:23 crc kubenswrapper[4925]: E0202 10:58:23.664386 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.664409 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.664451 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:23 crc kubenswrapper[4925]: E0202 10:58:23.664643 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:23 crc kubenswrapper[4925]: E0202 10:58:23.664722 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:23 crc kubenswrapper[4925]: E0202 10:58:23.664839 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.667168 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:10:08.054915946 +0000 UTC Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.715069 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.715131 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.715143 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.715158 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.715169 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:23Z","lastTransitionTime":"2026-02-02T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.818849 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.818917 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.818934 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.818958 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.818975 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:23Z","lastTransitionTime":"2026-02-02T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.921742 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.921804 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.921821 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.921846 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:23 crc kubenswrapper[4925]: I0202 10:58:23.921873 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:23Z","lastTransitionTime":"2026-02-02T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.025415 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.025488 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.025511 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.025543 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.025567 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.128731 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.128780 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.128796 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.128815 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.128828 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.228113 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/0.log" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.228216 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4rr9" event={"ID":"b84c6881-f719-456f-9135-7dfb7688a48d","Type":"ContainerStarted","Data":"fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.231161 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.231238 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.231262 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.231296 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.231321 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.249885 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.282967 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.304950 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.324012 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.334857 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.334910 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.334928 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.334950 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.334967 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.341436 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.377306 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:04Z\\\",\\\"message\\\":\\\"e.openshift.io/self-managed-high-availability:true 
include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0065992f7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 10:58:04.643008 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.396782 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.411840 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:23Z\\\",\\\"message\\\":\\\"2026-02-02T10:57:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6\\\\n2026-02-02T10:57:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6 to /host/opt/cni/bin/\\\\n2026-02-02T10:57:38Z [verbose] multus-daemon started\\\\n2026-02-02T10:57:38Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:58:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.422310 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39c
d84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.440036 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.452069 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc 
kubenswrapper[4925]: I0202 10:58:24.455929 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.455965 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.455977 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.455993 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.456003 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.466155 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ed
e753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.482767 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b6
6aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.494581 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.504918 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.516718 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.526467 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.539698 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.558367 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.558420 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.558434 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc 
kubenswrapper[4925]: I0202 10:58:24.558454 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.558465 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.661519 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.661567 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.661579 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.661594 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.661604 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.667356 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:17:43.669090645 +0000 UTC Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.675447 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.689339 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.705443 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.717483 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.730171 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.762341 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:04Z\\\",\\\"message\\\":\\\"e.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0065992f7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 10:58:04.643008 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.768198 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.768263 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.768292 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.768319 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.768337 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.791824 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.811285 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.827689 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:23Z\\\",\\\"message\\\":\\\"2026-02-02T10:57:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6\\\\n2026-02-02T10:57:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6 to /host/opt/cni/bin/\\\\n2026-02-02T10:57:38Z [verbose] multus-daemon started\\\\n2026-02-02T10:57:38Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:58:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.840755 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39c
d84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.854347 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.868767 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc 
kubenswrapper[4925]: I0202 10:58:24.871694 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.871816 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.871898 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.871984 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.872016 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.885021 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ed
e753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.913267 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b6
6aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.924883 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.936852 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.946504 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.955847 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.966980 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.974484 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.974516 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.974529 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:24 crc 
kubenswrapper[4925]: I0202 10:58:24.974545 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:24 crc kubenswrapper[4925]: I0202 10:58:24.974556 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:24Z","lastTransitionTime":"2026-02-02T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.077466 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.077525 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.077547 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.077573 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.077593 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:25Z","lastTransitionTime":"2026-02-02T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.180240 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.180287 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.180297 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.180313 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.180324 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:25Z","lastTransitionTime":"2026-02-02T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.284111 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.284147 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.284158 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.284172 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.284181 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:25Z","lastTransitionTime":"2026-02-02T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.386111 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.386149 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.386161 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.386177 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.386188 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:25Z","lastTransitionTime":"2026-02-02T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.489184 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.489256 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.489279 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.489308 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.489332 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:25Z","lastTransitionTime":"2026-02-02T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.592197 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.592279 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.592307 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.592338 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.592363 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:25Z","lastTransitionTime":"2026-02-02T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.664182 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.664213 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.664315 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.664389 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:25 crc kubenswrapper[4925]: E0202 10:58:25.664337 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:25 crc kubenswrapper[4925]: E0202 10:58:25.664557 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:25 crc kubenswrapper[4925]: E0202 10:58:25.665005 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:25 crc kubenswrapper[4925]: E0202 10:58:25.665378 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.667446 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:02:59.787116652 +0000 UTC Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.696818 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.696869 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.696888 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.696911 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.696930 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:25Z","lastTransitionTime":"2026-02-02T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.800599 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.800666 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.800689 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.800840 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.800863 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:25Z","lastTransitionTime":"2026-02-02T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.903469 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.903517 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.903528 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.903544 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:25 crc kubenswrapper[4925]: I0202 10:58:25.903555 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:25Z","lastTransitionTime":"2026-02-02T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.006496 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.006541 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.006553 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.006570 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.006583 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.109144 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.109207 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.109226 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.109251 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.109269 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.212983 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.213045 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.213063 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.213127 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.213197 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.318251 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.318327 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.318348 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.318380 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.318403 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.422043 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.422181 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.422223 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.422256 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.422278 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.525026 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.525119 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.525144 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.525172 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.525196 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.628761 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.628914 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.628939 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.628968 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.628987 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.668192 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:50:01.011898822 +0000 UTC Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.730934 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.730992 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.731011 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.731033 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.731049 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.834733 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.834789 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.834800 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.834820 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.834830 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.938119 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.938205 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.938235 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.938266 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:26 crc kubenswrapper[4925]: I0202 10:58:26.938287 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:26Z","lastTransitionTime":"2026-02-02T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.040220 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.040257 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.040269 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.040282 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.040291 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.143147 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.143193 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.143206 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.143223 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.143268 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.246178 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.246239 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.246260 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.246310 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.246332 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.349341 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.349386 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.349397 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.349412 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.349424 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.451972 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.452015 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.452026 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.452041 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.452052 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.554970 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.555256 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.555366 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.555525 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.555632 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.658568 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.658627 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.658646 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.658667 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.658683 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.664196 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.664223 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.664287 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:27 crc kubenswrapper[4925]: E0202 10:58:27.664405 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.664518 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:27 crc kubenswrapper[4925]: E0202 10:58:27.664616 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:27 crc kubenswrapper[4925]: E0202 10:58:27.664745 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:27 crc kubenswrapper[4925]: E0202 10:58:27.664920 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.669247 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:32:57.620671989 +0000 UTC Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.761459 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.761522 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.761538 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.761562 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.761578 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.865029 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.865094 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.865105 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.865122 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.865134 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.968244 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.968315 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.968336 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.968361 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:27 crc kubenswrapper[4925]: I0202 10:58:27.968379 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:27Z","lastTransitionTime":"2026-02-02T10:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.070794 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.070859 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.070876 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.070902 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.070920 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:28Z","lastTransitionTime":"2026-02-02T10:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.173487 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.173549 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.173559 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.173570 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.173578 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:28Z","lastTransitionTime":"2026-02-02T10:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.276764 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.276855 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.276877 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.276905 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.276926 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:28Z","lastTransitionTime":"2026-02-02T10:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.380407 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.380496 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.380548 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.380580 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.380603 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:28Z","lastTransitionTime":"2026-02-02T10:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.483782 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.483857 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.483881 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.483912 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.483936 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:28Z","lastTransitionTime":"2026-02-02T10:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.587355 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.587425 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.587442 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.587467 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.587484 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:28Z","lastTransitionTime":"2026-02-02T10:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.669491 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:52:36.789806662 +0000 UTC Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.691005 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.691067 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.691212 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.691243 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.691268 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:28Z","lastTransitionTime":"2026-02-02T10:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.794521 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.794603 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.794623 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.794647 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.794666 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:28Z","lastTransitionTime":"2026-02-02T10:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.898433 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.898498 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.898511 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.898552 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:28 crc kubenswrapper[4925]: I0202 10:58:28.898566 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:28Z","lastTransitionTime":"2026-02-02T10:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.001787 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.001865 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.001900 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.001930 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.001951 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.106157 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.106224 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.106246 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.106277 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.106296 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.209581 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.209628 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.209640 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.209657 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.209669 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.312680 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.312734 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.312751 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.312774 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.312791 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.365055 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.365156 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.365180 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.365205 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.365226 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.388253 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.394521 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.394572 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.394629 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.394652 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.394670 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.415281 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.420697 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.420757 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.420776 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.420801 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.420818 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.445632 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.451127 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.451178 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.451196 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.451220 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.451237 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.471649 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.485219 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.485301 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.485326 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.485359 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.485385 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.506118 4925 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1a35f2f-5b56-42fa-a9f8-72c174fa2172\\\",\\\"systemUUID\\\":\\\"c5eed54a-6e55-454f-8465-b3753cd45b28\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:29Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.506250 4925 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.509155 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.509224 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.509244 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.509270 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.509292 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.612066 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.612137 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.612153 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.612175 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.612192 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.663539 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.663600 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.663560 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.663539 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.663668 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.663795 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.663894 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:29 crc kubenswrapper[4925]: E0202 10:58:29.663980 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.670162 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:06:59.701148174 +0000 UTC Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.715819 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.715865 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.715880 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.715900 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.715914 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.818437 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.818474 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.818485 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.818501 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.818511 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.921870 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.921961 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.921980 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.922003 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:29 crc kubenswrapper[4925]: I0202 10:58:29.922021 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:29Z","lastTransitionTime":"2026-02-02T10:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.025915 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.025986 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.026011 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.026040 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.026060 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.128751 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.128833 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.128876 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.128897 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.128912 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.231952 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.231987 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.231997 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.232047 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.232057 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.334100 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.334174 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.334192 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.334214 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.334232 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.436949 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.437007 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.437025 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.437048 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.437064 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.539895 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.539960 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.539968 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.539983 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.539992 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.643160 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.643214 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.643233 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.643298 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.643317 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.664909 4925 scope.go:117] "RemoveContainer" containerID="5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.670475 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:38:24.838557553 +0000 UTC Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.746928 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.747034 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.747059 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.747122 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.747147 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.849860 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.849931 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.849946 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.849964 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.849976 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.952500 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.952572 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.952591 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.952616 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:30 crc kubenswrapper[4925]: I0202 10:58:30.952634 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:30Z","lastTransitionTime":"2026-02-02T10:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.055183 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.055223 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.055252 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.055268 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.055278 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.157316 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.157359 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.157368 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.157384 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.157391 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.253334 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/2.log" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.256212 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.256710 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.263572 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.263606 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.263619 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.263633 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.263646 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.270440 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.285623 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d
985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.297152 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc 
kubenswrapper[4925]: I0202 10:58:31.308470 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.324521 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:23Z\\\",\\\"message\\\":\\\"2026-02-02T10:57:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6\\\\n2026-02-02T10:57:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6 to /host/opt/cni/bin/\\\\n2026-02-02T10:57:38Z [verbose] multus-daemon started\\\\n2026-02-02T10:57:38Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:58:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.335984 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.350416 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.362881 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.365675 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.365705 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.365716 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.365732 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.365744 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.376773 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.387428 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.400070 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b6
6aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.410051 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.421580 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.434418 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.443257 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.458909 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:04Z\\\",\\\"message\\\":\\\"e.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0065992f7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 10:58:04.643008 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.468153 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.468187 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.468199 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.468214 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.468224 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.473818 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.483652 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f5f45da-c64d-4b11-9648-2dc7a4f34f93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce4cd6d19d406e202c1d4b56b6368afe79f5308cb92de982830d65a94cf66aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.510208 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.571607 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.571644 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.571654 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.571670 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.571680 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.664145 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.664188 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.664208 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:31 crc kubenswrapper[4925]: E0202 10:58:31.664278 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.664352 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:31 crc kubenswrapper[4925]: E0202 10:58:31.664427 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:31 crc kubenswrapper[4925]: E0202 10:58:31.664511 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:31 crc kubenswrapper[4925]: E0202 10:58:31.664612 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.671376 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:24:21.330846454 +0000 UTC Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.674223 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.674249 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.674259 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.674290 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.674302 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.776634 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.776672 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.776683 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.776716 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.776727 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.879280 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.879350 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.879362 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.879379 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.879391 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.982365 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.982444 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.982462 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.982487 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:31 crc kubenswrapper[4925]: I0202 10:58:31.982504 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:31Z","lastTransitionTime":"2026-02-02T10:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.085202 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.085297 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.085320 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.085352 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.085371 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:32Z","lastTransitionTime":"2026-02-02T10:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.188552 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.188616 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.188635 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.188659 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.188677 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:32Z","lastTransitionTime":"2026-02-02T10:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.263533 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/3.log" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.264714 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/2.log" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.270324 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" exitCode=1 Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.270418 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.270540 4925 scope.go:117] "RemoveContainer" containerID="5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.272011 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 10:58:32 crc kubenswrapper[4925]: E0202 10:58:32.272655 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.287741 4925 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc 
kubenswrapper[4925]: I0202 10:58:32.293893 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.293955 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.293976 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.294007 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.294026 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:32Z","lastTransitionTime":"2026-02-02T10:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.301115 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.319385 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:23Z\\\",\\\"message\\\":\\\"2026-02-02T10:57:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6\\\\n2026-02-02T10:57:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6 to /host/opt/cni/bin/\\\\n2026-02-02T10:57:38Z [verbose] multus-daemon started\\\\n2026-02-02T10:57:38Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:58:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.333442 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39c
d84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.353681 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.365682 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.379895 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.393793 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.397296 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.397350 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.397369 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.397392 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.397406 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:32Z","lastTransitionTime":"2026-02-02T10:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.410727 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.431604 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.447779 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.463377 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.480458 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.499732 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.499782 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.499792 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.499806 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.499815 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:32Z","lastTransitionTime":"2026-02-02T10:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.502976 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5394c4832654e49835791f75cba9e8d87ad634242fca20d5b092859a00a2bd20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:04Z\\\",\\\"message\\\":\\\"e.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0065992f7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 10:58:04.643008 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:31Z\\\",\\\"message\\\":\\\"g admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:58:31.540701 7073 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 10:58:31.540707 7073 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0202 10:58:31.540713 7073 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0202 10:58:31.540720 7073 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0202 10:58:31.540725 7073 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0202 10:58:31.540702 7073 services_controller.go:45\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cn
i-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",
\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc 
kubenswrapper[4925]: I0202 10:58:32.515994 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08
137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.526571 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f5f45da-c64d-4b11-9648-2dc7a4f34f93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce4cd6d19d406e202c1d4b56b6368afe79f5308cb92de982830d65a94cf66aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.552149 4925 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70
411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.565431 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d
608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.577993 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:32Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.602260 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.602298 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.602307 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.602319 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.602352 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:32Z","lastTransitionTime":"2026-02-02T10:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.672494 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:44:22.227593339 +0000 UTC Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.704853 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.704909 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.704918 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.704934 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.704944 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:32Z","lastTransitionTime":"2026-02-02T10:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.807045 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.807147 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.807172 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.807200 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.807221 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:32Z","lastTransitionTime":"2026-02-02T10:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.910109 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.910156 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.910168 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.910209 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:32 crc kubenswrapper[4925]: I0202 10:58:32.910224 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:32Z","lastTransitionTime":"2026-02-02T10:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.013147 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.013195 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.013207 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.013228 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.013243 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.116747 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.116810 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.116827 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.116850 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.116867 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.219685 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.219984 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.219996 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.220012 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.220023 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.275806 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/3.log" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.279753 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 10:58:33 crc kubenswrapper[4925]: E0202 10:58:33.280064 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.292458 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.306214 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f5f45da-c64d-4b11-9648-2dc7a4f34f93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce4cd6d19d406e202c1d4b56b6368afe79f5308cb92de982830d65a94cf66aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.322944 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.323010 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.323031 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.323059 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.323120 4925 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.340134 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf9862
83f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.354330 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.368160 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.380155 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.408023 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:31Z\\\",\\\"message\\\":\\\"g admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:58:31.540701 7073 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 10:58:31.540707 7073 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0202 10:58:31.540713 7073 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0202 10:58:31.540720 7073 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0202 10:58:31.540725 7073 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0202 10:58:31.540702 7073 services_controller.go:45\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.426315 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.426341 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.426350 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.426364 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.426373 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.428968 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.447203 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.464217 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:23Z\\\",\\\"message\\\":\\\"2026-02-02T10:57:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6\\\\n2026-02-02T10:57:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6 to /host/opt/cni/bin/\\\\n2026-02-02T10:57:38Z [verbose] multus-daemon started\\\\n2026-02-02T10:57:38Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:58:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.479760 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39c
d84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.498334 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c6
28f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.511237 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc 
kubenswrapper[4925]: I0202 10:58:33.523277 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.528541 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.528581 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.528614 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.528631 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.528639 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.541495 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 
10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89
c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.556329 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.567962 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.583150 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.596576 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:33Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.631700 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.631750 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.631770 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.631794 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.631815 4925 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.664143 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.664195 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:33 crc kubenswrapper[4925]: E0202 10:58:33.664339 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.664607 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.664776 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:33 crc kubenswrapper[4925]: E0202 10:58:33.664790 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:33 crc kubenswrapper[4925]: E0202 10:58:33.665025 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:33 crc kubenswrapper[4925]: E0202 10:58:33.665361 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.673701 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:56:48.94688729 +0000 UTC Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.735483 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.735545 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.735564 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.735593 4925 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.735612 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.838838 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.838923 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.838947 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.838979 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.838997 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.943184 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.943260 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.943286 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.943318 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:33 crc kubenswrapper[4925]: I0202 10:58:33.943343 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:33Z","lastTransitionTime":"2026-02-02T10:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.046938 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.047000 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.047021 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.047048 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.047066 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.149846 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.149902 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.149925 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.149955 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.149976 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.252919 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.252974 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.252988 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.253007 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.253022 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.356038 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.356497 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.356942 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.357255 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.357530 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.460953 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.461020 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.461038 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.461062 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.461122 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.564313 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.564389 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.564412 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.564437 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.564455 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.668278 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.668336 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.668359 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.668389 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.668411 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.674541 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:32:28.045932471 +0000 UTC Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.684458 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f2xkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73934878-f30f-4170-aa82-716b163b9928\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1529eb383283c13a35605d626ff5d4265b4f9c35a91b89687fb2c22c9f5f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ab87aa617f4503ba3ff555403dd35472a24a96ba4c75524f974b59ecc4c3637\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335dfd4d0efdc9284010843701a08137159208ac5ec060f11ee84f115239179a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b6dbefe94da607a48581621efa606dc795c2245fad67df4f4a847a4a25fc6ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d15ec9dc389e00b58db08cf13f86f4c8c1f0302655e6b7596afb58e4978be5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98118d9eac686d52892f20c5279aa9e21b8b14eb29e7d32923a2bae78ab5b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc2d1e32978000081849846b5b722442c0eec957ac8af593720392da654175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"na
me\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz6l2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f2xkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.697992 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f5f45da-c64d-4b11-9648-2dc7a4f34f93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ce4cd6d19d406e202c1d4b56b6368afe79f5308cb92de982830d65a94cf66aa\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://725dc27e3ea93d5315830738921a7c14e25b046f99505253cda2a62b64c483be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.723538 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a8c7617-2b15-46b6-adcb-fc560c1e284c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8896b5fc2475b8e334db61451b90c84015477ca4d3b2aa842a826f14505319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31928abfca70411ec5f293cc14a70e100d7d5eb851fb154a6ead361f4cb81f23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ea24547f5ef38013f5336a570337ed183b9066db7d59dd3f64a1c03c0b5f94d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea71532899e960a5da07a10caea855d9c7c894169ca746628e413580894dc57\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdf3c84673035d81fa74540e2f8205e39f65d163107fd1bbfa3ceb13412ccfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54b943458051a98500923d58f797f3e5cf986283f5c933990fa1b0a157f1834c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756c57df295409f84913a1490bb7f5e710d71b39534f303ff5a418ee3d114cf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://095900ca334783e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://095900ca3347
83e3f35f2e8dbf180b103581607afe735b939d03de804f79561d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.741266 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409c4cd36c5a40e3d016d34c3c7696731331ead190c5b60ed30204d753ebab53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.765349 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.772019 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.772144 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.772168 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.772195 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.772220 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.790268 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kzdpz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"866ea9ea-2376-4958-899c-c6889eee7137\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2000de9474732bf065d3a2284cc18ea03b64b96a3755f4aabc094e185817a16d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsstd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kzdpz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.818144 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a57c5d12-a4de-413c-a581-4b693550e8c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:31Z\\\",\\\"message\\\":\\\"g admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:31Z is after 2025-08-24T17:21:41Z]\\\\nI0202 10:58:31.540701 7073 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 10:58:31.540707 7073 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0202 10:58:31.540713 7073 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0202 10:58:31.540720 7073 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0202 10:58:31.540725 7073 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0202 10:58:31.540702 7073 services_controller.go:45\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:58:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d989851af1d17d193
c34ebcb0fb03e14baadd29d734a17581887136ca438466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr96t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.848428 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.871334 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-q4rr9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b84c6881-f719-456f-9135-7dfb7688a48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:58:23Z\\\",\\\"message\\\":\\\"2026-02-02T10:57:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6\\\\n2026-02-02T10:57:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e3d0224f-9b44-4a49-842d-033f4321b2b6 to /host/opt/cni/bin/\\\\n2026-02-02T10:57:38Z [verbose] multus-daemon started\\\\n2026-02-02T10:57:38Z [verbose] 
Readiness Indicator file check\\\\n2026-02-02T10:58:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fzzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-q4rr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.876790 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.877118 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.877277 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.877390 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.877479 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.889830 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lp7j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ec29b9-abb0-4fb5-8463-ff2860921d8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f56bf45d95eca39cd84cfe78037d7da3d2e4f06ef46e07dbcbf63cf78063b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdxnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lp7j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.907129 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9551801b-8926-4673-942b-bcd89aa4eb7b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9be3f3d
985139d2a49cffe94b062f9c16519215b55183378b792b4dac522b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57f2e2359c6ad256901d42fcc6e7aff4a8c628f3ba10330b0353528d103d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wjwxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.922210 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f183d5-0612-452e-b762-c841df3a306d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcx5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:46Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hjf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc 
kubenswrapper[4925]: I0202 10:58:34.941836 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7cbcd3c-f8cf-4f96-98aa-014a785a8924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff7ae920a42d3188ef7bcd99aa3c4bd344f55fd90a9ae9b95411db6b6d30de8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139493bf9b644468f00ef7346d25ede753332f6401fb46c8ea3d5118de8fbdaf\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6be1a1c8fa8650db2277393fecfd53a6d3dac682ec792eddf1aea329fcf56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ff57b0f304a0318b7de8b714fc5dd27b905c02e3fe86ad2e9e6748161b7c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.965992 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"929e2376-c9ca-4fd7-95cc-53d1e78a7480\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:58:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:57:31Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0202 10:57:30.833802 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:57:30.833911 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:57:30.834585 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2463194702/tls.crt::/tmp/serving-cert-2463194702/tls.key\\\\\\\"\\\\nI0202 10:57:31.118384 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:57:31.125467 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:57:31.125487 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:57:31.125515 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:57:31.125521 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:57:31.135675 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:57:31.135727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0202 10:57:31.135684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 10:57:31.135737 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:57:31.135746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:57:31.135750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:57:31.135753 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:57:31.135757 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 10:57:31.137491 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c8e7d702f7d1dbb5011819b6832e51b6
6aaf2f7d4213c67676f56fe786945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:57:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.981262 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.981331 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.981351 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.981378 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.981393 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:34Z","lastTransitionTime":"2026-02-02T10:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:34 crc kubenswrapper[4925]: I0202 10:58:34.988387 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acdb7f80-74fc-46b8-8712-6a362a377b50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfe499daab30c53d66b4d71c5e56e499c9ee293e821bce24ef11632a1fbffe18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bb8025f66b
89077c858562effce1877fa680505058616988508db2e93b021d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://717d830ea1b8588cd8db207fe4a45ec84434578e9233383d090b3b5b682608b6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4c170c0788e2d6eb4f7560a9ab5177341befd3f3c44608d93397fe6148fdbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:34Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.007547 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85575c4fdf8d883761743575887450ba4e57843c9c3b18ddaeb2dbac3182789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:58:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.025746 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.044045 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3956e1f92eb6898381a99581a3fa90712505ab2c52a75ae834012a0c9c13fe43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://632e4551a7781af03eda981ab10de6eade01ddec2379c34e3c199fb75943e647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.061324 4925 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08797ee8-d3b4-4eed-8482-c19a5b6b87c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3af1148cf9f0c7096a250c09694803e3430a52fe3604343bc07f91e9c3af520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770611b03ba9a94ea3ea12af63083be9260a5614
02868a717e44a5158854ab48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvmwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:57:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fphfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:58:35Z is after 2025-08-24T17:21:41Z" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.085785 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.085848 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.085868 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:35 crc 
kubenswrapper[4925]: I0202 10:58:35.085897 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.085919 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:35Z","lastTransitionTime":"2026-02-02T10:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.190185 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.190265 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.190290 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.190324 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.190352 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:35Z","lastTransitionTime":"2026-02-02T10:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.294647 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.294719 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.294739 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.294763 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.294784 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:35Z","lastTransitionTime":"2026-02-02T10:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.398540 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.398609 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.398633 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.398757 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.398779 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:35Z","lastTransitionTime":"2026-02-02T10:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.503788 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.503859 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.503881 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.503910 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.503929 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:35Z","lastTransitionTime":"2026-02-02T10:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.608152 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.608248 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.608275 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.608314 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.608341 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:35Z","lastTransitionTime":"2026-02-02T10:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.626894 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.627044 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627195 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.627146011 +0000 UTC m=+156.631395063 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627262 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627284 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627297 4925 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627371 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.627348527 +0000 UTC m=+156.631597489 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.627404 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.627439 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.627549 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627509 4925 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:58:35 crc 
kubenswrapper[4925]: E0202 10:58:35.627714 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627762 4925 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627786 4925 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627803 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.627747297 +0000 UTC m=+156.631996459 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627618 4925 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627866 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.62783793 +0000 UTC m=+156.632086932 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.627955 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.627920602 +0000 UTC m=+156.632169764 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.663798 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.663861 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.663798 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.664002 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.665045 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.665269 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.665505 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:35 crc kubenswrapper[4925]: E0202 10:58:35.665589 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.674975 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:18:16.668892731 +0000 UTC Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.711867 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.711931 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.711952 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.711980 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 
10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.712000 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:35Z","lastTransitionTime":"2026-02-02T10:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.815654 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.815721 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.815743 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.815774 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.815796 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:35Z","lastTransitionTime":"2026-02-02T10:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.920416 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.920488 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.920508 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.920537 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:35 crc kubenswrapper[4925]: I0202 10:58:35.920557 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:35Z","lastTransitionTime":"2026-02-02T10:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.024187 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.024244 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.024265 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.024291 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.024308 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.128339 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.128407 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.128422 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.128446 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.128463 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.232248 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.232311 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.232333 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.232361 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.232380 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.337312 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.337391 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.337414 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.337446 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.337469 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.440464 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.440548 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.440566 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.440587 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.440604 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.544958 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.545027 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.545046 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.545113 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.545134 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.649497 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.649559 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.649576 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.649597 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.649611 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.675886 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:13:46.22548234 +0000 UTC Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.753776 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.753861 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.753886 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.753919 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.753947 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.857419 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.857470 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.857488 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.857514 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.857533 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.961127 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.961202 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.961227 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.961256 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:36 crc kubenswrapper[4925]: I0202 10:58:36.961276 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:36Z","lastTransitionTime":"2026-02-02T10:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.064738 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.064794 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.064813 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.064839 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.064858 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.167838 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.167911 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.167932 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.167965 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.167994 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.272013 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.272155 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.272171 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.272193 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.272202 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.376655 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.376700 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.376713 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.376733 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.376745 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.480435 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.480479 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.480491 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.480508 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.480520 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.584487 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.584570 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.584597 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.584630 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.584653 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.663902 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.664012 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.664032 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.664015 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:37 crc kubenswrapper[4925]: E0202 10:58:37.664318 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:37 crc kubenswrapper[4925]: E0202 10:58:37.664168 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:37 crc kubenswrapper[4925]: E0202 10:58:37.664395 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:37 crc kubenswrapper[4925]: E0202 10:58:37.664490 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.676231 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:26:31.351165712 +0000 UTC Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.687709 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.687744 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.687757 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.687773 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.687786 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.792175 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.792267 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.792290 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.792322 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.792347 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.895412 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.895470 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.895484 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.895504 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.895517 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.997798 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.997836 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.997847 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.997860 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:37 crc kubenswrapper[4925]: I0202 10:58:37.997869 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:37Z","lastTransitionTime":"2026-02-02T10:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.100397 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.100450 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.100466 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.100486 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.100500 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:38Z","lastTransitionTime":"2026-02-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.203067 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.203125 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.203133 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.203147 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.203157 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:38Z","lastTransitionTime":"2026-02-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.305525 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.305570 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.305579 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.305594 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.305608 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:38Z","lastTransitionTime":"2026-02-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.409112 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.409184 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.409207 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.409237 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.409255 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:38Z","lastTransitionTime":"2026-02-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.512800 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.512881 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.512906 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.512942 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.512965 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:38Z","lastTransitionTime":"2026-02-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.617218 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.617279 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.617288 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.617303 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.617314 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:38Z","lastTransitionTime":"2026-02-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.677238 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:40:16.195370359 +0000 UTC Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.720929 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.720966 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.720977 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.720992 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.721021 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:38Z","lastTransitionTime":"2026-02-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.823513 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.823572 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.823591 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.823615 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.823636 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:38Z","lastTransitionTime":"2026-02-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.927289 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.927457 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.927496 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.927531 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:38 crc kubenswrapper[4925]: I0202 10:58:38.927559 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:38Z","lastTransitionTime":"2026-02-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.030878 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.030919 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.030930 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.030947 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.030956 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:39Z","lastTransitionTime":"2026-02-02T10:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.133912 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.134126 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.134191 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.134232 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.134242 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:39Z","lastTransitionTime":"2026-02-02T10:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.237111 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.237158 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.237171 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.237188 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.237201 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:39Z","lastTransitionTime":"2026-02-02T10:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.339967 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.340011 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.340023 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.340040 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.340051 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:39Z","lastTransitionTime":"2026-02-02T10:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.442554 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.442619 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.442637 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.442661 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.442677 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:39Z","lastTransitionTime":"2026-02-02T10:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.545237 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.545284 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.545296 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.545353 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.545367 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:39Z","lastTransitionTime":"2026-02-02T10:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.647998 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.648033 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.648042 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.648057 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.648068 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:39Z","lastTransitionTime":"2026-02-02T10:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.663973 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.664020 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.664216 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.664279 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:39 crc kubenswrapper[4925]: E0202 10:58:39.664461 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:39 crc kubenswrapper[4925]: E0202 10:58:39.664620 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:39 crc kubenswrapper[4925]: E0202 10:58:39.664709 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:39 crc kubenswrapper[4925]: E0202 10:58:39.664814 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.677487 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:52:25.547084436 +0000 UTC Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.751197 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.751240 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.751255 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.751274 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.751287 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:39Z","lastTransitionTime":"2026-02-02T10:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.845344 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.845420 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.845440 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.845471 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.845501 4925 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:58:39Z","lastTransitionTime":"2026-02-02T10:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.904266 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt"] Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.904765 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.907607 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.909392 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.909480 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.909557 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.969558 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-q4rr9" podStartSLOduration=68.969537767 podStartE2EDuration="1m8.969537767s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:39.951302389 +0000 UTC m=+96.955551391" watchObservedRunningTime="2026-02-02 10:58:39.969537767 +0000 UTC m=+96.973786739" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.987448 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/583ab2c3-9cfa-4eb4-a060-63902df1ee91-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.987519 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/583ab2c3-9cfa-4eb4-a060-63902df1ee91-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.987576 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/583ab2c3-9cfa-4eb4-a060-63902df1ee91-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.987610 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/583ab2c3-9cfa-4eb4-a060-63902df1ee91-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.987673 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/583ab2c3-9cfa-4eb4-a060-63902df1ee91-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.991419 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wjwxt" podStartSLOduration=68.991392082 podStartE2EDuration="1m8.991392082s" 
podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:39.991358971 +0000 UTC m=+96.995607943" watchObservedRunningTime="2026-02-02 10:58:39.991392082 +0000 UTC m=+96.995641054" Feb 02 10:58:39 crc kubenswrapper[4925]: I0202 10:58:39.991609 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lp7j8" podStartSLOduration=68.991601387 podStartE2EDuration="1m8.991601387s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:39.969857355 +0000 UTC m=+96.974106327" watchObservedRunningTime="2026-02-02 10:58:39.991601387 +0000 UTC m=+96.995850369" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.039808 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.039791417 podStartE2EDuration="38.039791417s" podCreationTimestamp="2026-02-02 10:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:40.039212872 +0000 UTC m=+97.043461864" watchObservedRunningTime="2026-02-02 10:58:40.039791417 +0000 UTC m=+97.044040379" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.062111 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.062064543 podStartE2EDuration="1m9.062064543s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:40.06195459 +0000 UTC m=+97.066203592" watchObservedRunningTime="2026-02-02 
10:58:40.062064543 +0000 UTC m=+97.066313525" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.081225 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.081207496 podStartE2EDuration="1m5.081207496s" podCreationTimestamp="2026-02-02 10:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:40.080058315 +0000 UTC m=+97.084307287" watchObservedRunningTime="2026-02-02 10:58:40.081207496 +0000 UTC m=+97.085456468" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.090684 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/583ab2c3-9cfa-4eb4-a060-63902df1ee91-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.090828 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/583ab2c3-9cfa-4eb4-a060-63902df1ee91-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.090883 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/583ab2c3-9cfa-4eb4-a060-63902df1ee91-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.090931 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/583ab2c3-9cfa-4eb4-a060-63902df1ee91-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.090947 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/583ab2c3-9cfa-4eb4-a060-63902df1ee91-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.090974 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/583ab2c3-9cfa-4eb4-a060-63902df1ee91-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.091765 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/583ab2c3-9cfa-4eb4-a060-63902df1ee91-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.092462 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/583ab2c3-9cfa-4eb4-a060-63902df1ee91-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: 
\"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.101891 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/583ab2c3-9cfa-4eb4-a060-63902df1ee91-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.112936 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/583ab2c3-9cfa-4eb4-a060-63902df1ee91-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5v9wt\" (UID: \"583ab2c3-9cfa-4eb4-a060-63902df1ee91\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.186887 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podStartSLOduration=69.186864934 podStartE2EDuration="1m9.186864934s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:40.160115148 +0000 UTC m=+97.164364130" watchObservedRunningTime="2026-02-02 10:58:40.186864934 +0000 UTC m=+97.191113906" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.187623 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.187618304 podStartE2EDuration="16.187618304s" podCreationTimestamp="2026-02-02 10:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 10:58:40.187032458 +0000 UTC m=+97.191281430" watchObservedRunningTime="2026-02-02 10:58:40.187618304 +0000 UTC m=+97.191867276" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.238411 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.247305 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=67.247290501 podStartE2EDuration="1m7.247290501s" podCreationTimestamp="2026-02-02 10:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:40.230861431 +0000 UTC m=+97.235110393" watchObservedRunningTime="2026-02-02 10:58:40.247290501 +0000 UTC m=+97.251539473" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.302317 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kzdpz" podStartSLOduration=69.302296993 podStartE2EDuration="1m9.302296993s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:40.274013076 +0000 UTC m=+97.278262038" watchObservedRunningTime="2026-02-02 10:58:40.302296993 +0000 UTC m=+97.306545955" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.306770 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" event={"ID":"583ab2c3-9cfa-4eb4-a060-63902df1ee91","Type":"ContainerStarted","Data":"2317ec34763e3551f20943f91a639fb26bd042b3ce23ca50eb262986ee3f7112"} Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.328808 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-additional-cni-plugins-f2xkn" podStartSLOduration=69.328786072 podStartE2EDuration="1m9.328786072s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:40.327374305 +0000 UTC m=+97.331623267" watchObservedRunningTime="2026-02-02 10:58:40.328786072 +0000 UTC m=+97.333035034" Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.678577 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:58:09.281932474 +0000 UTC Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.678636 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 10:58:40 crc kubenswrapper[4925]: I0202 10:58:40.688122 4925 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 10:58:41 crc kubenswrapper[4925]: I0202 10:58:41.313188 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" event={"ID":"583ab2c3-9cfa-4eb4-a060-63902df1ee91","Type":"ContainerStarted","Data":"8fca628c685f8ee091811bf232c99a34b947f536762ee3db12ed07c2cc521d4a"} Feb 02 10:58:41 crc kubenswrapper[4925]: I0202 10:58:41.342986 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5v9wt" podStartSLOduration=70.342960017 podStartE2EDuration="1m10.342960017s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:41.341619332 +0000 UTC m=+98.345881434" watchObservedRunningTime="2026-02-02 10:58:41.342960017 +0000 UTC m=+98.347209019" Feb 02 10:58:41 crc 
kubenswrapper[4925]: I0202 10:58:41.663606 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:41 crc kubenswrapper[4925]: I0202 10:58:41.663687 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:41 crc kubenswrapper[4925]: I0202 10:58:41.663747 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:41 crc kubenswrapper[4925]: E0202 10:58:41.663871 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:41 crc kubenswrapper[4925]: I0202 10:58:41.663888 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:41 crc kubenswrapper[4925]: E0202 10:58:41.664036 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:41 crc kubenswrapper[4925]: E0202 10:58:41.664209 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:41 crc kubenswrapper[4925]: E0202 10:58:41.664451 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:43 crc kubenswrapper[4925]: I0202 10:58:43.663331 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:43 crc kubenswrapper[4925]: I0202 10:58:43.663415 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:43 crc kubenswrapper[4925]: I0202 10:58:43.663340 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:43 crc kubenswrapper[4925]: E0202 10:58:43.663520 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:43 crc kubenswrapper[4925]: I0202 10:58:43.663621 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:43 crc kubenswrapper[4925]: E0202 10:58:43.663779 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:43 crc kubenswrapper[4925]: E0202 10:58:43.664131 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:43 crc kubenswrapper[4925]: E0202 10:58:43.664396 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:45 crc kubenswrapper[4925]: I0202 10:58:45.663646 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:45 crc kubenswrapper[4925]: I0202 10:58:45.663693 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:45 crc kubenswrapper[4925]: I0202 10:58:45.663701 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:45 crc kubenswrapper[4925]: I0202 10:58:45.663748 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:45 crc kubenswrapper[4925]: E0202 10:58:45.663832 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:45 crc kubenswrapper[4925]: E0202 10:58:45.663971 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:45 crc kubenswrapper[4925]: E0202 10:58:45.664549 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:45 crc kubenswrapper[4925]: E0202 10:58:45.664632 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:45 crc kubenswrapper[4925]: I0202 10:58:45.665191 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 10:58:45 crc kubenswrapper[4925]: E0202 10:58:45.665431 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:58:47 crc kubenswrapper[4925]: I0202 10:58:47.663319 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:47 crc kubenswrapper[4925]: I0202 10:58:47.663371 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:47 crc kubenswrapper[4925]: I0202 10:58:47.663435 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:47 crc kubenswrapper[4925]: E0202 10:58:47.663490 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:47 crc kubenswrapper[4925]: I0202 10:58:47.663327 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:47 crc kubenswrapper[4925]: E0202 10:58:47.663761 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:47 crc kubenswrapper[4925]: E0202 10:58:47.663787 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:47 crc kubenswrapper[4925]: E0202 10:58:47.663930 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:49 crc kubenswrapper[4925]: I0202 10:58:49.664236 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:49 crc kubenswrapper[4925]: I0202 10:58:49.664455 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:49 crc kubenswrapper[4925]: I0202 10:58:49.664472 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:49 crc kubenswrapper[4925]: E0202 10:58:49.664622 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:49 crc kubenswrapper[4925]: E0202 10:58:49.664860 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:49 crc kubenswrapper[4925]: I0202 10:58:49.664888 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:49 crc kubenswrapper[4925]: E0202 10:58:49.665148 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:49 crc kubenswrapper[4925]: E0202 10:58:49.665291 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:50 crc kubenswrapper[4925]: I0202 10:58:50.212796 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:50 crc kubenswrapper[4925]: E0202 10:58:50.213129 4925 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:58:50 crc kubenswrapper[4925]: E0202 10:58:50.213566 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs podName:39f183d5-0612-452e-b762-c841df3a306d nodeName:}" failed. No retries permitted until 2026-02-02 10:59:54.213527161 +0000 UTC m=+171.217776163 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs") pod "network-metrics-daemon-hjf4s" (UID: "39f183d5-0612-452e-b762-c841df3a306d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:58:51 crc kubenswrapper[4925]: I0202 10:58:51.663891 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:51 crc kubenswrapper[4925]: I0202 10:58:51.663962 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:51 crc kubenswrapper[4925]: I0202 10:58:51.664026 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:51 crc kubenswrapper[4925]: E0202 10:58:51.664158 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:51 crc kubenswrapper[4925]: E0202 10:58:51.664528 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:51 crc kubenswrapper[4925]: E0202 10:58:51.664678 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:51 crc kubenswrapper[4925]: I0202 10:58:51.663848 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:51 crc kubenswrapper[4925]: E0202 10:58:51.664894 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:53 crc kubenswrapper[4925]: I0202 10:58:53.663714 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:53 crc kubenswrapper[4925]: I0202 10:58:53.663728 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:53 crc kubenswrapper[4925]: I0202 10:58:53.663793 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:53 crc kubenswrapper[4925]: E0202 10:58:53.663944 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:53 crc kubenswrapper[4925]: I0202 10:58:53.663977 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:53 crc kubenswrapper[4925]: E0202 10:58:53.664167 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:53 crc kubenswrapper[4925]: E0202 10:58:53.664357 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:53 crc kubenswrapper[4925]: E0202 10:58:53.664481 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:55 crc kubenswrapper[4925]: I0202 10:58:55.663264 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:55 crc kubenswrapper[4925]: I0202 10:58:55.663309 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:55 crc kubenswrapper[4925]: I0202 10:58:55.663332 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:55 crc kubenswrapper[4925]: E0202 10:58:55.663474 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:55 crc kubenswrapper[4925]: I0202 10:58:55.663488 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:55 crc kubenswrapper[4925]: E0202 10:58:55.663584 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:55 crc kubenswrapper[4925]: E0202 10:58:55.663761 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:55 crc kubenswrapper[4925]: E0202 10:58:55.663867 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:57 crc kubenswrapper[4925]: I0202 10:58:57.664222 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:57 crc kubenswrapper[4925]: E0202 10:58:57.664932 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:58:57 crc kubenswrapper[4925]: I0202 10:58:57.664274 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:57 crc kubenswrapper[4925]: E0202 10:58:57.665220 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:57 crc kubenswrapper[4925]: I0202 10:58:57.664239 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:57 crc kubenswrapper[4925]: E0202 10:58:57.665443 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:57 crc kubenswrapper[4925]: I0202 10:58:57.664296 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:57 crc kubenswrapper[4925]: E0202 10:58:57.665672 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:58 crc kubenswrapper[4925]: I0202 10:58:58.665580 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 10:58:58 crc kubenswrapper[4925]: E0202 10:58:58.665922 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:58:59 crc kubenswrapper[4925]: I0202 10:58:59.664038 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:58:59 crc kubenswrapper[4925]: I0202 10:58:59.664171 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:58:59 crc kubenswrapper[4925]: I0202 10:58:59.664189 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:58:59 crc kubenswrapper[4925]: I0202 10:58:59.664189 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:58:59 crc kubenswrapper[4925]: E0202 10:58:59.664385 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:58:59 crc kubenswrapper[4925]: E0202 10:58:59.664521 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:58:59 crc kubenswrapper[4925]: E0202 10:58:59.664786 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:58:59 crc kubenswrapper[4925]: E0202 10:58:59.664877 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:01 crc kubenswrapper[4925]: I0202 10:59:01.664034 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:01 crc kubenswrapper[4925]: I0202 10:59:01.664147 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:01 crc kubenswrapper[4925]: I0202 10:59:01.664172 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:01 crc kubenswrapper[4925]: I0202 10:59:01.664034 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:01 crc kubenswrapper[4925]: E0202 10:59:01.664228 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:01 crc kubenswrapper[4925]: E0202 10:59:01.664334 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:01 crc kubenswrapper[4925]: E0202 10:59:01.664495 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:01 crc kubenswrapper[4925]: E0202 10:59:01.664595 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:03 crc kubenswrapper[4925]: I0202 10:59:03.663370 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:03 crc kubenswrapper[4925]: E0202 10:59:03.663526 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:03 crc kubenswrapper[4925]: I0202 10:59:03.663370 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:03 crc kubenswrapper[4925]: I0202 10:59:03.663385 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:03 crc kubenswrapper[4925]: E0202 10:59:03.663616 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:03 crc kubenswrapper[4925]: E0202 10:59:03.663813 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:03 crc kubenswrapper[4925]: I0202 10:59:03.664260 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:03 crc kubenswrapper[4925]: E0202 10:59:03.664360 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:04 crc kubenswrapper[4925]: E0202 10:59:04.671960 4925 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 10:59:05 crc kubenswrapper[4925]: E0202 10:59:05.315953 4925 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:59:05 crc kubenswrapper[4925]: I0202 10:59:05.664185 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:05 crc kubenswrapper[4925]: I0202 10:59:05.664202 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:05 crc kubenswrapper[4925]: I0202 10:59:05.664221 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:05 crc kubenswrapper[4925]: E0202 10:59:05.664516 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:05 crc kubenswrapper[4925]: E0202 10:59:05.664371 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:05 crc kubenswrapper[4925]: E0202 10:59:05.664563 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:05 crc kubenswrapper[4925]: I0202 10:59:05.664220 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:05 crc kubenswrapper[4925]: E0202 10:59:05.664645 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:07 crc kubenswrapper[4925]: I0202 10:59:07.664362 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:07 crc kubenswrapper[4925]: I0202 10:59:07.664422 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:07 crc kubenswrapper[4925]: I0202 10:59:07.664427 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:07 crc kubenswrapper[4925]: E0202 10:59:07.664565 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:07 crc kubenswrapper[4925]: I0202 10:59:07.665077 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:07 crc kubenswrapper[4925]: E0202 10:59:07.665229 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:07 crc kubenswrapper[4925]: E0202 10:59:07.665291 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:07 crc kubenswrapper[4925]: E0202 10:59:07.665475 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.404148 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/1.log" Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.406235 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/0.log" Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.406280 4925 generic.go:334] "Generic (PLEG): container finished" podID="b84c6881-f719-456f-9135-7dfb7688a48d" containerID="fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032" exitCode=1 Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.406314 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4rr9" event={"ID":"b84c6881-f719-456f-9135-7dfb7688a48d","Type":"ContainerDied","Data":"fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032"} Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.406360 4925 scope.go:117] "RemoveContainer" containerID="3c0cfaf235205d588cc350459bd5b09aadc160f299218c7b60949907a38c876e" Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.407540 4925 scope.go:117] "RemoveContainer" containerID="fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032" Feb 02 10:59:09 crc kubenswrapper[4925]: E0202 10:59:09.407951 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-q4rr9_openshift-multus(b84c6881-f719-456f-9135-7dfb7688a48d)\"" pod="openshift-multus/multus-q4rr9" podUID="b84c6881-f719-456f-9135-7dfb7688a48d" Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.664149 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.664247 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.664155 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:09 crc kubenswrapper[4925]: E0202 10:59:09.664305 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:09 crc kubenswrapper[4925]: E0202 10:59:09.664423 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:09 crc kubenswrapper[4925]: E0202 10:59:09.664471 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:09 crc kubenswrapper[4925]: I0202 10:59:09.664658 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:09 crc kubenswrapper[4925]: E0202 10:59:09.664726 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:10 crc kubenswrapper[4925]: E0202 10:59:10.318178 4925 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 02 10:59:10 crc kubenswrapper[4925]: I0202 10:59:10.411364 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/1.log" Feb 02 10:59:10 crc kubenswrapper[4925]: I0202 10:59:10.665357 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 10:59:10 crc kubenswrapper[4925]: E0202 10:59:10.665614 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlpb_openshift-ovn-kubernetes(a57c5d12-a4de-413c-a581-4b693550e8c3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" Feb 02 10:59:11 crc kubenswrapper[4925]: I0202 10:59:11.663407 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:11 crc kubenswrapper[4925]: I0202 10:59:11.663475 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:11 crc kubenswrapper[4925]: I0202 10:59:11.663521 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:11 crc kubenswrapper[4925]: I0202 10:59:11.663475 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:11 crc kubenswrapper[4925]: E0202 10:59:11.663602 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:11 crc kubenswrapper[4925]: E0202 10:59:11.663737 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:11 crc kubenswrapper[4925]: E0202 10:59:11.663843 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:11 crc kubenswrapper[4925]: E0202 10:59:11.663937 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:13 crc kubenswrapper[4925]: I0202 10:59:13.663500 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:13 crc kubenswrapper[4925]: I0202 10:59:13.663617 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:13 crc kubenswrapper[4925]: I0202 10:59:13.663694 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:13 crc kubenswrapper[4925]: E0202 10:59:13.663704 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:13 crc kubenswrapper[4925]: I0202 10:59:13.663798 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:13 crc kubenswrapper[4925]: E0202 10:59:13.663802 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:13 crc kubenswrapper[4925]: E0202 10:59:13.663892 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:13 crc kubenswrapper[4925]: E0202 10:59:13.663976 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:15 crc kubenswrapper[4925]: E0202 10:59:15.318832 4925 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:59:15 crc kubenswrapper[4925]: I0202 10:59:15.664102 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:15 crc kubenswrapper[4925]: I0202 10:59:15.664137 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:15 crc kubenswrapper[4925]: I0202 10:59:15.664138 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:15 crc kubenswrapper[4925]: I0202 10:59:15.664228 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:15 crc kubenswrapper[4925]: E0202 10:59:15.664419 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:15 crc kubenswrapper[4925]: E0202 10:59:15.664532 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:15 crc kubenswrapper[4925]: E0202 10:59:15.664615 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:15 crc kubenswrapper[4925]: E0202 10:59:15.664672 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:17 crc kubenswrapper[4925]: I0202 10:59:17.664360 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:17 crc kubenswrapper[4925]: I0202 10:59:17.664369 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:17 crc kubenswrapper[4925]: I0202 10:59:17.664439 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:17 crc kubenswrapper[4925]: I0202 10:59:17.664626 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:17 crc kubenswrapper[4925]: E0202 10:59:17.664677 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:17 crc kubenswrapper[4925]: E0202 10:59:17.664756 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:17 crc kubenswrapper[4925]: E0202 10:59:17.664975 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:17 crc kubenswrapper[4925]: E0202 10:59:17.665101 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:19 crc kubenswrapper[4925]: I0202 10:59:19.663813 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:19 crc kubenswrapper[4925]: I0202 10:59:19.663856 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:19 crc kubenswrapper[4925]: I0202 10:59:19.663914 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:19 crc kubenswrapper[4925]: I0202 10:59:19.663977 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:19 crc kubenswrapper[4925]: E0202 10:59:19.664360 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:19 crc kubenswrapper[4925]: I0202 10:59:19.664449 4925 scope.go:117] "RemoveContainer" containerID="fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032" Feb 02 10:59:19 crc kubenswrapper[4925]: E0202 10:59:19.664596 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:19 crc kubenswrapper[4925]: E0202 10:59:19.664740 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:19 crc kubenswrapper[4925]: E0202 10:59:19.665040 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:20 crc kubenswrapper[4925]: E0202 10:59:20.320611 4925 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:59:20 crc kubenswrapper[4925]: I0202 10:59:20.451870 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/1.log" Feb 02 10:59:20 crc kubenswrapper[4925]: I0202 10:59:20.451954 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4rr9" event={"ID":"b84c6881-f719-456f-9135-7dfb7688a48d","Type":"ContainerStarted","Data":"b84be1334f2ff06bf521e5ecdedb24f9d1ffe0fd8cd6bd23e7e3ee59feabaae5"} Feb 02 10:59:21 crc kubenswrapper[4925]: I0202 10:59:21.664200 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:21 crc kubenswrapper[4925]: I0202 10:59:21.664263 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:21 crc kubenswrapper[4925]: I0202 10:59:21.664268 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:21 crc kubenswrapper[4925]: E0202 10:59:21.664356 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:21 crc kubenswrapper[4925]: I0202 10:59:21.664188 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:21 crc kubenswrapper[4925]: E0202 10:59:21.664475 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:21 crc kubenswrapper[4925]: E0202 10:59:21.664695 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:21 crc kubenswrapper[4925]: E0202 10:59:21.664754 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:23 crc kubenswrapper[4925]: I0202 10:59:23.663959 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:23 crc kubenswrapper[4925]: I0202 10:59:23.664020 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:23 crc kubenswrapper[4925]: E0202 10:59:23.664218 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:23 crc kubenswrapper[4925]: I0202 10:59:23.664241 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:23 crc kubenswrapper[4925]: I0202 10:59:23.664250 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:23 crc kubenswrapper[4925]: E0202 10:59:23.664920 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:23 crc kubenswrapper[4925]: E0202 10:59:23.665141 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:23 crc kubenswrapper[4925]: E0202 10:59:23.665831 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:23 crc kubenswrapper[4925]: I0202 10:59:23.666365 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 10:59:24 crc kubenswrapper[4925]: I0202 10:59:24.464923 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/3.log" Feb 02 10:59:24 crc kubenswrapper[4925]: I0202 10:59:24.467617 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerStarted","Data":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} Feb 02 10:59:24 crc kubenswrapper[4925]: I0202 10:59:24.468511 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:59:24 crc kubenswrapper[4925]: I0202 10:59:24.500071 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podStartSLOduration=113.500050036 podStartE2EDuration="1m53.500050036s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:24.492796225 +0000 UTC m=+141.497045207" watchObservedRunningTime="2026-02-02 10:59:24.500050036 +0000 UTC m=+141.504298998" Feb 02 10:59:24 crc kubenswrapper[4925]: I0202 10:59:24.500861 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hjf4s"] Feb 02 10:59:24 crc kubenswrapper[4925]: I0202 10:59:24.500972 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:24 crc kubenswrapper[4925]: E0202 10:59:24.501062 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:25 crc kubenswrapper[4925]: E0202 10:59:25.321981 4925 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:59:25 crc kubenswrapper[4925]: I0202 10:59:25.664030 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:25 crc kubenswrapper[4925]: E0202 10:59:25.664282 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:25 crc kubenswrapper[4925]: I0202 10:59:25.664457 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:25 crc kubenswrapper[4925]: I0202 10:59:25.664544 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:25 crc kubenswrapper[4925]: E0202 10:59:25.664640 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:25 crc kubenswrapper[4925]: E0202 10:59:25.664713 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:26 crc kubenswrapper[4925]: I0202 10:59:26.664245 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:26 crc kubenswrapper[4925]: E0202 10:59:26.664458 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:27 crc kubenswrapper[4925]: I0202 10:59:27.663646 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:27 crc kubenswrapper[4925]: I0202 10:59:27.663706 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:27 crc kubenswrapper[4925]: I0202 10:59:27.663673 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:27 crc kubenswrapper[4925]: E0202 10:59:27.663849 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:27 crc kubenswrapper[4925]: E0202 10:59:27.663916 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:27 crc kubenswrapper[4925]: E0202 10:59:27.663976 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:28 crc kubenswrapper[4925]: I0202 10:59:28.664136 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:28 crc kubenswrapper[4925]: E0202 10:59:28.664407 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hjf4s" podUID="39f183d5-0612-452e-b762-c841df3a306d" Feb 02 10:59:29 crc kubenswrapper[4925]: I0202 10:59:29.664511 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:29 crc kubenswrapper[4925]: I0202 10:59:29.664612 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:29 crc kubenswrapper[4925]: I0202 10:59:29.664623 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:29 crc kubenswrapper[4925]: E0202 10:59:29.665791 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:59:29 crc kubenswrapper[4925]: E0202 10:59:29.665883 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:59:29 crc kubenswrapper[4925]: E0202 10:59:29.665800 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.646052 4925 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.663518 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.666606 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.668451 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.740783 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.741502 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.744977 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.745193 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.745685 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.745977 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v7bdd"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.746112 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.746299 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.746325 4925 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.746592 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.746810 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.747100 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.747753 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.758615 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.759110 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.759512 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.759614 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.760036 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.760106 4925 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.760246 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.760496 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.760763 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.761245 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.761462 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.771019 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.771645 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.772914 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mpgcb"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.773427 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.773519 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.773701 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.774523 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.778341 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.780667 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.780901 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.780968 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.781092 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qx9mv"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.781491 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.781585 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-qx9mv" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.783971 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-68qpw"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.786190 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.786424 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.786583 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.786734 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.786862 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.787012 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.787579 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.788727 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.789005 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.789303 4925 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.789564 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.793296 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.795530 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.801193 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.801566 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.801681 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.801927 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.803805 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.804599 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.807152 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hlntp"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.807774 4925 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-7gsrw"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.808024 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.808148 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.808526 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.808861 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.808860 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.814788 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b55lq"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.814971 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.815370 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.815679 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.816061 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.816296 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.817462 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.817750 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.817927 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.818765 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.819740 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-nzwbr"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.820461 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.821046 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cmf26"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.821819 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.822017 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.822447 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.823212 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.823411 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-56md8"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.824214 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.832414 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.833283 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.837949 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.840857 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t6w8r"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.841066 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzq6k\" (UniqueName: \"kubernetes.io/projected/670ef54d-fb71-49c9-930b-cae1088d828d-kube-api-access-mzq6k\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.841128 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.841165 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-client-ca\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.841429 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da7ca31-35e0-47b3-a877-63d50ed68d70-config\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.842811 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvkv7\" (UniqueName: \"kubernetes.io/projected/45405c2c-780c-4190-8cad-466ecfd84d2d-kube-api-access-cvkv7\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.842848 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nk8\" (UniqueName: \"kubernetes.io/projected/a284e563-4e19-4602-8475-282ed1c71e23-kube-api-access-79nk8\") pod \"cluster-samples-operator-665b6dd947-nzfrn\" (UID: \"a284e563-4e19-4602-8475-282ed1c71e23\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.842874 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.842900 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-config\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.842918 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqjjw\" (UniqueName: \"kubernetes.io/projected/7babde6f-c3db-4e20-9871-e2b8da06c334-kube-api-access-sqjjw\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.842939 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.842992 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0896c0b8-88f0-41d3-a630-9098a1bf6be7-config\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.843014 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-image-import-ca\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.843033 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/628e72ad-1a83-4e42-a5bd-3ab0c710993e-serving-cert\") pod 
\"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.847497 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854286 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854380 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0542f8d3-1555-4a7c-9c54-e3c075841559-etcd-client\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854427 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0896c0b8-88f0-41d3-a630-9098a1bf6be7-service-ca-bundle\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc 
kubenswrapper[4925]: I0202 10:59:30.854449 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zmb\" (UniqueName: \"kubernetes.io/projected/0896c0b8-88f0-41d3-a630-9098a1bf6be7-kube-api-access-n4zmb\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854511 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-client-ca\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854545 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/670ef54d-fb71-49c9-930b-cae1088d828d-serving-cert\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854592 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9fch\" (UniqueName: \"kubernetes.io/projected/686d6cf9-761e-4394-ab8c-316841705a26-kube-api-access-c9fch\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854646 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/e9fea0fb-a1ca-4ab8-a1fc-92673a76105e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hlntp\" (UID: \"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854677 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-config\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854714 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04827b9-eb5b-4326-a0d3-297e3fec4eef-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6bfv4\" (UID: \"e04827b9-eb5b-4326-a0d3-297e3fec4eef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854734 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8af1feeb-ac81-4603-9e99-c06de71038f0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xkk7n\" (UID: \"8af1feeb-ac81-4603-9e99-c06de71038f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854758 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/628e72ad-1a83-4e42-a5bd-3ab0c710993e-etcd-client\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" 
Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854784 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e04827b9-eb5b-4326-a0d3-297e3fec4eef-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6bfv4\" (UID: \"e04827b9-eb5b-4326-a0d3-297e3fec4eef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854816 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0896c0b8-88f0-41d3-a630-9098a1bf6be7-serving-cert\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854837 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-config\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854861 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-config\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854881 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/628e72ad-1a83-4e42-a5bd-3ab0c710993e-encryption-config\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854902 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0896c0b8-88f0-41d3-a630-9098a1bf6be7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854935 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7babde6f-c3db-4e20-9871-e2b8da06c334-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.854967 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxctm\" (UniqueName: \"kubernetes.io/projected/dc71a086-3e99-48d7-99d8-5a08c0425e16-kube-api-access-rxctm\") pod \"openshift-apiserver-operator-796bbdcf4f-j5cv2\" (UID: \"dc71a086-3e99-48d7-99d8-5a08c0425e16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.855009 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2m5l\" (UniqueName: \"kubernetes.io/projected/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-kube-api-access-n2m5l\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: 
\"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.855040 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7babde6f-c3db-4e20-9871-e2b8da06c334-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.855095 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0542f8d3-1555-4a7c-9c54-e3c075841559-audit-policies\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.855118 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-trusted-ca-bundle\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.855136 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0542f8d3-1555-4a7c-9c54-e3c075841559-encryption-config\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.855154 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/0542f8d3-1555-4a7c-9c54-e3c075841559-audit-dir\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.855172 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.855189 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-serving-cert\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.856452 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.857357 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.857929 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.858651 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.859948 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.860253 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.860487 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.860657 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.860795 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.860881 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.860963 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861045 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861144 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 10:59:30 crc kubenswrapper[4925]: 
I0202 10:59:30.861252 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861333 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861396 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861429 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861515 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861596 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861832 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861936 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862094 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862116 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862247 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862366 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862510 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862525 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862564 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862644 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862722 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d7hhc"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862876 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.863101 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.863215 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.863296 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.860929 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861606 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862407 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.864141 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.864235 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.863303 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.862440 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.864361 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861651 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.861666 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 10:59:30 crc 
kubenswrapper[4925]: I0202 10:59:30.865406 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.865771 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5da7ca31-35e0-47b3-a877-63d50ed68d70-images\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.865831 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.865863 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/670ef54d-fb71-49c9-930b-cae1088d828d-config\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.865956 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7babde6f-c3db-4e20-9871-e2b8da06c334-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: 
I0202 10:59:30.865986 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e04827b9-eb5b-4326-a0d3-297e3fec4eef-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6bfv4\" (UID: \"e04827b9-eb5b-4326-a0d3-297e3fec4eef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866015 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hfs6\" (UniqueName: \"kubernetes.io/projected/e9fea0fb-a1ca-4ab8-a1fc-92673a76105e-kube-api-access-4hfs6\") pod \"openshift-config-operator-7777fb866f-hlntp\" (UID: \"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866087 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2c4d\" (UniqueName: \"kubernetes.io/projected/8af1feeb-ac81-4603-9e99-c06de71038f0-kube-api-access-v2c4d\") pod \"openshift-controller-manager-operator-756b6f6bc6-xkk7n\" (UID: \"8af1feeb-ac81-4603-9e99-c06de71038f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866126 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a284e563-4e19-4602-8475-282ed1c71e23-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nzfrn\" (UID: \"a284e563-4e19-4602-8475-282ed1c71e23\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866144 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5da7ca31-35e0-47b3-a877-63d50ed68d70-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866167 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc71a086-3e99-48d7-99d8-5a08c0425e16-config\") pod \"openshift-apiserver-operator-796bbdcf4f-j5cv2\" (UID: \"dc71a086-3e99-48d7-99d8-5a08c0425e16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866186 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-serving-cert\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866205 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866222 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/628e72ad-1a83-4e42-a5bd-3ab0c710993e-audit-dir\") pod \"apiserver-76f77b778f-68qpw\" (UID: 
\"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866239 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-oauth-config\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866272 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/686d6cf9-761e-4394-ab8c-316841705a26-audit-dir\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866291 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tdvx\" (UniqueName: \"kubernetes.io/projected/81af45ef-2049-4155-9c0b-ae722e6b8c8a-kube-api-access-9tdvx\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866308 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0542f8d3-1555-4a7c-9c54-e3c075841559-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866327 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk9g5\" (UniqueName: 
\"kubernetes.io/projected/0542f8d3-1555-4a7c-9c54-e3c075841559-kube-api-access-jk9g5\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866344 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-audit-policies\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866362 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-oauth-serving-cert\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866385 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9fea0fb-a1ca-4ab8-a1fc-92673a76105e-serving-cert\") pod \"openshift-config-operator-7777fb866f-hlntp\" (UID: \"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866406 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc71a086-3e99-48d7-99d8-5a08c0425e16-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-j5cv2\" (UID: \"dc71a086-3e99-48d7-99d8-5a08c0425e16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:30 crc 
kubenswrapper[4925]: I0202 10:59:30.866423 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/628e72ad-1a83-4e42-a5bd-3ab0c710993e-node-pullsecrets\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866439 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866463 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866481 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670ef54d-fb71-49c9-930b-cae1088d828d-trusted-ca\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866500 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxkv\" (UniqueName: 
\"kubernetes.io/projected/628e72ad-1a83-4e42-a5bd-3ab0c710993e-kube-api-access-ltxkv\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866517 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0542f8d3-1555-4a7c-9c54-e3c075841559-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866540 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnhw\" (UniqueName: \"kubernetes.io/projected/2c1d6c8a-41c7-48a0-853c-d1df60efb422-kube-api-access-kwnhw\") pod \"downloads-7954f5f757-qx9mv\" (UID: \"2c1d6c8a-41c7-48a0-853c-d1df60efb422\") " pod="openshift-console/downloads-7954f5f757-qx9mv" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866556 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-etcd-serving-ca\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866575 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45405c2c-780c-4190-8cad-466ecfd84d2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 
10:59:30.866596 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8af1feeb-ac81-4603-9e99-c06de71038f0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xkk7n\" (UID: \"8af1feeb-ac81-4603-9e99-c06de71038f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866611 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0542f8d3-1555-4a7c-9c54-e3c075841559-serving-cert\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866632 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-audit\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866656 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9znjx\" (UniqueName: \"kubernetes.io/projected/5da7ca31-35e0-47b3-a877-63d50ed68d70-kube-api-access-9znjx\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866673 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866695 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866697 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.867247 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.868438 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.870122 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.866710 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.870805 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-service-ca\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.873608 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.873980 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.874109 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 
10:59:30.874246 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.875203 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.877169 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v7bdd"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.880965 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.881752 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.881976 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.882605 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.888578 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.892405 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.909730 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.910316 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.910221 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.913315 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.913759 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.914270 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.914360 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.921588 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-q87qm"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.922313 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.922819 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qx9mv"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.922917 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.923235 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.926508 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.942067 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zbmjc"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.948368 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.948970 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.950472 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.950963 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.965719 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.966637 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971340 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/628e72ad-1a83-4e42-a5bd-3ab0c710993e-node-pullsecrets\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971371 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971392 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc71a086-3e99-48d7-99d8-5a08c0425e16-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-j5cv2\" (UID: \"dc71a086-3e99-48d7-99d8-5a08c0425e16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971407 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971422 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670ef54d-fb71-49c9-930b-cae1088d828d-trusted-ca\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971441 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwnhw\" (UniqueName: \"kubernetes.io/projected/2c1d6c8a-41c7-48a0-853c-d1df60efb422-kube-api-access-kwnhw\") pod \"downloads-7954f5f757-qx9mv\" (UID: \"2c1d6c8a-41c7-48a0-853c-d1df60efb422\") " pod="openshift-console/downloads-7954f5f757-qx9mv" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971454 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-etcd-serving-ca\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971467 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ltxkv\" (UniqueName: \"kubernetes.io/projected/628e72ad-1a83-4e42-a5bd-3ab0c710993e-kube-api-access-ltxkv\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971481 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0542f8d3-1555-4a7c-9c54-e3c075841559-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971501 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8af1feeb-ac81-4603-9e99-c06de71038f0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xkk7n\" (UID: \"8af1feeb-ac81-4603-9e99-c06de71038f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971515 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0542f8d3-1555-4a7c-9c54-e3c075841559-serving-cert\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971531 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45405c2c-780c-4190-8cad-466ecfd84d2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc 
kubenswrapper[4925]: I0202 10:59:30.971548 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9znjx\" (UniqueName: \"kubernetes.io/projected/5da7ca31-35e0-47b3-a877-63d50ed68d70-kube-api-access-9znjx\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971565 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971588 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-audit\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971606 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971620 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971634 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-service-ca\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971651 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzq6k\" (UniqueName: \"kubernetes.io/projected/670ef54d-fb71-49c9-930b-cae1088d828d-kube-api-access-mzq6k\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971665 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971680 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-client-ca\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971694 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5da7ca31-35e0-47b3-a877-63d50ed68d70-config\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971709 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvkv7\" (UniqueName: \"kubernetes.io/projected/45405c2c-780c-4190-8cad-466ecfd84d2d-kube-api-access-cvkv7\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971738 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79nk8\" (UniqueName: \"kubernetes.io/projected/a284e563-4e19-4602-8475-282ed1c71e23-kube-api-access-79nk8\") pod \"cluster-samples-operator-665b6dd947-nzfrn\" (UID: \"a284e563-4e19-4602-8475-282ed1c71e23\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971754 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971787 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqjjw\" (UniqueName: \"kubernetes.io/projected/7babde6f-c3db-4e20-9871-e2b8da06c334-kube-api-access-sqjjw\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971805 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971821 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-config\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971836 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0896c0b8-88f0-41d3-a630-9098a1bf6be7-config\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971851 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-image-import-ca\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971867 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/628e72ad-1a83-4e42-a5bd-3ab0c710993e-serving-cert\") pod 
\"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971882 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971899 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971916 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0542f8d3-1555-4a7c-9c54-e3c075841559-etcd-client\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971930 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0896c0b8-88f0-41d3-a630-9098a1bf6be7-service-ca-bundle\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971946 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n4zmb\" (UniqueName: \"kubernetes.io/projected/0896c0b8-88f0-41d3-a630-9098a1bf6be7-kube-api-access-n4zmb\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971963 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-client-ca\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971979 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/670ef54d-fb71-49c9-930b-cae1088d828d-serving-cert\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.971994 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9fch\" (UniqueName: \"kubernetes.io/projected/686d6cf9-761e-4394-ab8c-316841705a26-kube-api-access-c9fch\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972010 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e9fea0fb-a1ca-4ab8-a1fc-92673a76105e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hlntp\" (UID: \"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972024 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-config\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972041 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04827b9-eb5b-4326-a0d3-297e3fec4eef-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6bfv4\" (UID: \"e04827b9-eb5b-4326-a0d3-297e3fec4eef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972059 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/628e72ad-1a83-4e42-a5bd-3ab0c710993e-etcd-client\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972097 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e04827b9-eb5b-4326-a0d3-297e3fec4eef-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6bfv4\" (UID: \"e04827b9-eb5b-4326-a0d3-297e3fec4eef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972119 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8af1feeb-ac81-4603-9e99-c06de71038f0-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-xkk7n\" (UID: \"8af1feeb-ac81-4603-9e99-c06de71038f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972135 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0896c0b8-88f0-41d3-a630-9098a1bf6be7-serving-cert\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972150 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-config\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972167 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-config\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972183 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/628e72ad-1a83-4e42-a5bd-3ab0c710993e-encryption-config\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972197 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0896c0b8-88f0-41d3-a630-9098a1bf6be7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972213 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7babde6f-c3db-4e20-9871-e2b8da06c334-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972228 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7babde6f-c3db-4e20-9871-e2b8da06c334-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972244 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0542f8d3-1555-4a7c-9c54-e3c075841559-audit-policies\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972261 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxctm\" (UniqueName: \"kubernetes.io/projected/dc71a086-3e99-48d7-99d8-5a08c0425e16-kube-api-access-rxctm\") pod \"openshift-apiserver-operator-796bbdcf4f-j5cv2\" (UID: \"dc71a086-3e99-48d7-99d8-5a08c0425e16\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972277 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2m5l\" (UniqueName: \"kubernetes.io/projected/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-kube-api-access-n2m5l\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972291 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0542f8d3-1555-4a7c-9c54-e3c075841559-encryption-config\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972306 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0542f8d3-1555-4a7c-9c54-e3c075841559-audit-dir\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972343 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972359 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-serving-cert\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972374 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-trusted-ca-bundle\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972388 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/670ef54d-fb71-49c9-930b-cae1088d828d-config\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972420 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5da7ca31-35e0-47b3-a877-63d50ed68d70-images\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972435 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972457 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e04827b9-eb5b-4326-a0d3-297e3fec4eef-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6bfv4\" (UID: \"e04827b9-eb5b-4326-a0d3-297e3fec4eef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972493 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7babde6f-c3db-4e20-9871-e2b8da06c334-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972509 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hfs6\" (UniqueName: \"kubernetes.io/projected/e9fea0fb-a1ca-4ab8-a1fc-92673a76105e-kube-api-access-4hfs6\") pod \"openshift-config-operator-7777fb866f-hlntp\" (UID: \"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972525 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2c4d\" (UniqueName: \"kubernetes.io/projected/8af1feeb-ac81-4603-9e99-c06de71038f0-kube-api-access-v2c4d\") pod \"openshift-controller-manager-operator-756b6f6bc6-xkk7n\" (UID: \"8af1feeb-ac81-4603-9e99-c06de71038f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972546 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a284e563-4e19-4602-8475-282ed1c71e23-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-nzfrn\" (UID: \"a284e563-4e19-4602-8475-282ed1c71e23\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972561 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc71a086-3e99-48d7-99d8-5a08c0425e16-config\") pod \"openshift-apiserver-operator-796bbdcf4f-j5cv2\" (UID: \"dc71a086-3e99-48d7-99d8-5a08c0425e16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972575 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5da7ca31-35e0-47b3-a877-63d50ed68d70-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972589 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/628e72ad-1a83-4e42-a5bd-3ab0c710993e-audit-dir\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972603 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-oauth-config\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972620 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-serving-cert\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972633 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972657 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/686d6cf9-761e-4394-ab8c-316841705a26-audit-dir\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972679 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tdvx\" (UniqueName: \"kubernetes.io/projected/81af45ef-2049-4155-9c0b-ae722e6b8c8a-kube-api-access-9tdvx\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972695 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-audit-policies\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972711 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-oauth-serving-cert\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972726 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9fea0fb-a1ca-4ab8-a1fc-92673a76105e-serving-cert\") pod \"openshift-config-operator-7777fb866f-hlntp\" (UID: \"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972741 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0542f8d3-1555-4a7c-9c54-e3c075841559-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.972756 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk9g5\" (UniqueName: \"kubernetes.io/projected/0542f8d3-1555-4a7c-9c54-e3c075841559-kube-api-access-jk9g5\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.973288 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e9fea0fb-a1ca-4ab8-a1fc-92673a76105e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hlntp\" (UID: \"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.973368 
4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/628e72ad-1a83-4e42-a5bd-3ab0c710993e-node-pullsecrets\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.973457 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-config\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.975541 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-client-ca\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.975552 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.975617 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.975868 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0542f8d3-1555-4a7c-9c54-e3c075841559-audit-dir\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.976580 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0542f8d3-1555-4a7c-9c54-e3c075841559-audit-policies\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.978403 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-config\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.978477 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da7ca31-35e0-47b3-a877-63d50ed68d70-config\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.979046 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sch2v"] Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.980492 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8af1feeb-ac81-4603-9e99-c06de71038f0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xkk7n\" (UID: \"8af1feeb-ac81-4603-9e99-c06de71038f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.980857 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5da7ca31-35e0-47b3-a877-63d50ed68d70-images\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.981510 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-config\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.982022 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.982296 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0896c0b8-88f0-41d3-a630-9098a1bf6be7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc 
kubenswrapper[4925]: I0202 10:59:30.982479 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-audit\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.982786 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7babde6f-c3db-4e20-9871-e2b8da06c334-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.983895 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.984491 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc71a086-3e99-48d7-99d8-5a08c0425e16-config\") pod \"openshift-apiserver-operator-796bbdcf4f-j5cv2\" (UID: \"dc71a086-3e99-48d7-99d8-5a08c0425e16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.984544 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0896c0b8-88f0-41d3-a630-9098a1bf6be7-service-ca-bundle\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.985487 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45405c2c-780c-4190-8cad-466ecfd84d2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.985651 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.986375 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.986860 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/628e72ad-1a83-4e42-a5bd-3ab0c710993e-audit-dir\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.987940 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-etcd-serving-ca\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.989443 4925 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-service-ca\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.990009 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-oauth-serving-cert\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.990413 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.991252 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0542f8d3-1555-4a7c-9c54-e3c075841559-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.991697 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-oauth-config\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.992546 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.992694 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/670ef54d-fb71-49c9-930b-cae1088d828d-serving-cert\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.992895 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-serving-cert\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.993850 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-config\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.993970 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.994568 4925 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8af1feeb-ac81-4603-9e99-c06de71038f0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xkk7n\" (UID: \"8af1feeb-ac81-4603-9e99-c06de71038f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.994706 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-trusted-ca-bundle\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.994749 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/686d6cf9-761e-4394-ab8c-316841705a26-audit-dir\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.995621 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0542f8d3-1555-4a7c-9c54-e3c075841559-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.995836 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.996279 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0896c0b8-88f0-41d3-a630-9098a1bf6be7-config\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.996293 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/628e72ad-1a83-4e42-a5bd-3ab0c710993e-image-import-ca\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.996791 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/670ef54d-fb71-49c9-930b-cae1088d828d-config\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.996960 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.997270 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-audit-policies\") pod 
\"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.998005 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9fea0fb-a1ca-4ab8-a1fc-92673a76105e-serving-cert\") pod \"openshift-config-operator-7777fb866f-hlntp\" (UID: \"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.998341 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0542f8d3-1555-4a7c-9c54-e3c075841559-serving-cert\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.998673 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.999464 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670ef54d-fb71-49c9-930b-cae1088d828d-trusted-ca\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:30 crc kubenswrapper[4925]: I0202 10:59:30.999795 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-client-ca\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.000552 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-serving-cert\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.000930 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0896c0b8-88f0-41d3-a630-9098a1bf6be7-serving-cert\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.001033 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.001246 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.001953 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.002167 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc71a086-3e99-48d7-99d8-5a08c0425e16-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-j5cv2\" (UID: \"dc71a086-3e99-48d7-99d8-5a08c0425e16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.002285 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0542f8d3-1555-4a7c-9c54-e3c075841559-encryption-config\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.002297 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7babde6f-c3db-4e20-9871-e2b8da06c334-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.002503 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.002793 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/628e72ad-1a83-4e42-a5bd-3ab0c710993e-serving-cert\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.003043 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/628e72ad-1a83-4e42-a5bd-3ab0c710993e-etcd-client\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.003354 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/628e72ad-1a83-4e42-a5bd-3ab0c710993e-encryption-config\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.003547 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.003604 4925 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a284e563-4e19-4602-8475-282ed1c71e23-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nzfrn\" (UID: \"a284e563-4e19-4602-8475-282ed1c71e23\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.004434 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.004646 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5da7ca31-35e0-47b3-a877-63d50ed68d70-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.004716 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.005335 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.005498 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.005934 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kn82k"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.006051 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0542f8d3-1555-4a7c-9c54-e3c075841559-etcd-client\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.006217 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.006306 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fsn6h"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.006576 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.006626 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.006946 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-68qpw"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.007186 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.007324 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.007893 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.009697 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5wtgq"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.010148 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.010933 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.011926 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hlntp"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.013115 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b55lq"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.015504 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7gsrw"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.016744 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.017918 4925 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.019179 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zbmjc"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.020397 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.021562 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.023030 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cmf26"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.023956 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.025438 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.026453 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.027475 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.028535 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 
10:59:31.029615 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.030977 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.033144 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t6w8r"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.034377 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.035552 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d7hhc"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.036692 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.037805 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fsn6h"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.039167 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.040219 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-l76bs"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.040834 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l76bs" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.041429 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jhw58"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.044574 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.046715 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.046846 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.050167 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kn82k"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.054589 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-56md8"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.055502 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sch2v"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.057554 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nzwbr"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.060503 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.062120 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.064713 
4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.066658 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mpgcb"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.068763 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l76bs"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.070111 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.071576 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jhw58"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.072681 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nnqww"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.073976 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nnqww"] Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.074140 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.084186 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.093879 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e04827b9-eb5b-4326-a0d3-297e3fec4eef-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6bfv4\" (UID: \"e04827b9-eb5b-4326-a0d3-297e3fec4eef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.103427 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.104178 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04827b9-eb5b-4326-a0d3-297e3fec4eef-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6bfv4\" (UID: \"e04827b9-eb5b-4326-a0d3-297e3fec4eef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.123850 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.143825 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.164167 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.183777 4925 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.203604 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.224313 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.252095 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.264023 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.284610 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.324166 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.344615 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.363845 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.384832 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.403715 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.424288 4925 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.444266 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.463766 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.484379 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.504253 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.524049 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.544338 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.564259 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.584331 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.605260 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.623380 4925 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.644931 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.663998 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.664009 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.664044 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.665046 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.684540 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.704172 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.724727 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.744560 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 
10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.765120 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.784865 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.839342 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.840130 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.843410 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.865453 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.885324 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.905273 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.924052 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.942473 4925 request.go:700] Waited for 1.018924436s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.944590 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.963431 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 10:59:31 crc kubenswrapper[4925]: I0202 10:59:31.984946 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.004425 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.024124 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.043735 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.064591 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.084129 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.105818 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.123683 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 10:59:32 crc 
kubenswrapper[4925]: I0202 10:59:32.179801 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk9g5\" (UniqueName: \"kubernetes.io/projected/0542f8d3-1555-4a7c-9c54-e3c075841559-kube-api-access-jk9g5\") pod \"apiserver-7bbb656c7d-c7w8n\" (UID: \"0542f8d3-1555-4a7c-9c54-e3c075841559\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.184867 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.190995 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxctm\" (UniqueName: \"kubernetes.io/projected/dc71a086-3e99-48d7-99d8-5a08c0425e16-kube-api-access-rxctm\") pod \"openshift-apiserver-operator-796bbdcf4f-j5cv2\" (UID: \"dc71a086-3e99-48d7-99d8-5a08c0425e16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.233864 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2m5l\" (UniqueName: \"kubernetes.io/projected/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-kube-api-access-n2m5l\") pod \"controller-manager-879f6c89f-mpgcb\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.248832 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvkv7\" (UniqueName: \"kubernetes.io/projected/45405c2c-780c-4190-8cad-466ecfd84d2d-kube-api-access-cvkv7\") pod \"route-controller-manager-6576b87f9c-bf6lk\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.255580 4925 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.273639 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79nk8\" (UniqueName: \"kubernetes.io/projected/a284e563-4e19-4602-8475-282ed1c71e23-kube-api-access-79nk8\") pod \"cluster-samples-operator-665b6dd947-nzfrn\" (UID: \"a284e563-4e19-4602-8475-282ed1c71e23\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.294770 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9znjx\" (UniqueName: \"kubernetes.io/projected/5da7ca31-35e0-47b3-a877-63d50ed68d70-kube-api-access-9znjx\") pod \"machine-api-operator-5694c8668f-cmf26\" (UID: \"5da7ca31-35e0-47b3-a877-63d50ed68d70\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.304398 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.309554 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e04827b9-eb5b-4326-a0d3-297e3fec4eef-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6bfv4\" (UID: \"e04827b9-eb5b-4326-a0d3-297e3fec4eef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.317537 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.334140 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7babde6f-c3db-4e20-9871-e2b8da06c334-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.344688 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hfs6\" (UniqueName: \"kubernetes.io/projected/e9fea0fb-a1ca-4ab8-a1fc-92673a76105e-kube-api-access-4hfs6\") pod \"openshift-config-operator-7777fb866f-hlntp\" (UID: \"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.369649 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2c4d\" (UniqueName: \"kubernetes.io/projected/8af1feeb-ac81-4603-9e99-c06de71038f0-kube-api-access-v2c4d\") pod \"openshift-controller-manager-operator-756b6f6bc6-xkk7n\" (UID: \"8af1feeb-ac81-4603-9e99-c06de71038f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.397455 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zmb\" (UniqueName: \"kubernetes.io/projected/0896c0b8-88f0-41d3-a630-9098a1bf6be7-kube-api-access-n4zmb\") pod \"authentication-operator-69f744f599-v7bdd\" (UID: \"0896c0b8-88f0-41d3-a630-9098a1bf6be7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.405474 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ltxkv\" (UniqueName: \"kubernetes.io/projected/628e72ad-1a83-4e42-a5bd-3ab0c710993e-kube-api-access-ltxkv\") pod \"apiserver-76f77b778f-68qpw\" (UID: \"628e72ad-1a83-4e42-a5bd-3ab0c710993e\") " pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.420436 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.433338 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwnhw\" (UniqueName: \"kubernetes.io/projected/2c1d6c8a-41c7-48a0-853c-d1df60efb422-kube-api-access-kwnhw\") pod \"downloads-7954f5f757-qx9mv\" (UID: \"2c1d6c8a-41c7-48a0-853c-d1df60efb422\") " pod="openshift-console/downloads-7954f5f757-qx9mv" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.433532 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.443538 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tdvx\" (UniqueName: \"kubernetes.io/projected/81af45ef-2049-4155-9c0b-ae722e6b8c8a-kube-api-access-9tdvx\") pod \"console-f9d7485db-nzwbr\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.449659 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.463417 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzq6k\" (UniqueName: \"kubernetes.io/projected/670ef54d-fb71-49c9-930b-cae1088d828d-kube-api-access-mzq6k\") pod \"console-operator-58897d9998-b55lq\" (UID: \"670ef54d-fb71-49c9-930b-cae1088d828d\") " pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.479445 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.487904 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqjjw\" (UniqueName: \"kubernetes.io/projected/7babde6f-c3db-4e20-9871-e2b8da06c334-kube-api-access-sqjjw\") pod \"cluster-image-registry-operator-dc59b4c8b-9nlsz\" (UID: \"7babde6f-c3db-4e20-9871-e2b8da06c334\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.504457 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.507275 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9fch\" (UniqueName: \"kubernetes.io/projected/686d6cf9-761e-4394-ab8c-316841705a26-kube-api-access-c9fch\") pod \"oauth-openshift-558db77b4-7gsrw\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.525029 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.538816 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.544031 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.546722 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.553982 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.562314 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.564204 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.567220 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.578439 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.581157 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.585070 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.604045 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.624690 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.638514 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qx9mv" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.643526 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.663472 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.683420 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.703964 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hlntp"] Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.704447 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.705966 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-68qpw"] Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.722943 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.729600 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2"] Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.730562 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn"] Feb 02 10:59:32 crc kubenswrapper[4925]: W0202 10:59:32.741533 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc71a086_3e99_48d7_99d8_5a08c0425e16.slice/crio-79c94448fb2695bc5ff519454a0864f39bc6082a8925f3255b34dfc0cff7d113 WatchSource:0}: Error finding container 79c94448fb2695bc5ff519454a0864f39bc6082a8925f3255b34dfc0cff7d113: Status 404 returned error can't find the container with id 79c94448fb2695bc5ff519454a0864f39bc6082a8925f3255b34dfc0cff7d113 Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.747359 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.764385 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.767731 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.772967 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mpgcb"] Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.782353 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n"] Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.784392 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.810714 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.825460 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 10:59:32 crc kubenswrapper[4925]: W0202 10:59:32.832971 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124e0efd_2cce_4f29_bcc3_3d1a6fd5a62b.slice/crio-2a98f0c0a44e75c3a22ffe0c5ca60cc1812b5711b25507d203a96bba893193af WatchSource:0}: Error finding container 2a98f0c0a44e75c3a22ffe0c5ca60cc1812b5711b25507d203a96bba893193af: Status 404 returned error can't find the container with id 2a98f0c0a44e75c3a22ffe0c5ca60cc1812b5711b25507d203a96bba893193af Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.836016 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk"] Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.845420 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 10:59:32 crc 
kubenswrapper[4925]: I0202 10:59:32.864334 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 10:59:32 crc kubenswrapper[4925]: W0202 10:59:32.868728 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45405c2c_780c_4190_8cad_466ecfd84d2d.slice/crio-8068bf9024d08071cceaf918f898c741afe0a897b5c57dd66a2ece9b9873b4d5 WatchSource:0}: Error finding container 8068bf9024d08071cceaf918f898c741afe0a897b5c57dd66a2ece9b9873b4d5: Status 404 returned error can't find the container with id 8068bf9024d08071cceaf918f898c741afe0a897b5c57dd66a2ece9b9873b4d5 Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.886513 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.904625 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.923501 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.944462 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.962665 4925 request.go:700] Waited for 1.952338138s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 10:59:32.964510 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 10:59:32 crc kubenswrapper[4925]: I0202 
10:59:32.983977 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.003463 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.044061 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.064809 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.079527 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.083044 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.104089 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.124505 4925 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.143472 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.160308 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b55lq"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.163668 4925 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.183199 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.203597 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220173 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ss5z\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-kube-api-access-2ss5z\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220467 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/069939c4-f5e9-4dc7-891a-018a475a4871-machine-approver-tls\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220487 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/421043e2-e94a-4b1b-8571-ea62b753b06d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220504 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-trusted-ca\") pod 
\"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220706 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-tls\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220759 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zmm\" (UniqueName: \"kubernetes.io/projected/069939c4-f5e9-4dc7-891a-018a475a4871-kube-api-access-n4zmm\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220789 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/069939c4-f5e9-4dc7-891a-018a475a4871-config\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220818 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/069939c4-f5e9-4dc7-891a-018a475a4871-auth-proxy-config\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220879 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/421043e2-e94a-4b1b-8571-ea62b753b06d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220905 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-certificates\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220919 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-bound-sa-token\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.220945 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.221272 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:33.721259263 +0000 UTC m=+150.725508225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: W0202 10:59:33.221511 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8af1feeb_ac81_4603_9e99_c06de71038f0.slice/crio-f63fc73d8029449d86edaf6188610d7b29bd88505af8f4402c344c760127d37a WatchSource:0}: Error finding container f63fc73d8029449d86edaf6188610d7b29bd88505af8f4402c344c760127d37a: Status 404 returned error can't find the container with id f63fc73d8029449d86edaf6188610d7b29bd88505af8f4402c344c760127d37a Feb 02 10:59:33 crc kubenswrapper[4925]: W0202 10:59:33.225240 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod670ef54d_fb71_49c9_930b_cae1088d828d.slice/crio-901628361f22963b515f754484d7928f9e0059a5fab73fbce6f5370fb6ba29dd WatchSource:0}: Error finding container 901628361f22963b515f754484d7928f9e0059a5fab73fbce6f5370fb6ba29dd: Status 404 returned error can't find the container with id 901628361f22963b515f754484d7928f9e0059a5fab73fbce6f5370fb6ba29dd Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.225420 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.263849 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.279628 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.281134 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nzwbr"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.282571 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.283948 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.284001 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cmf26"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.285440 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qx9mv"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.304299 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.321778 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.321875 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:33.821860167 +0000 UTC m=+150.826109129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: W0202 10:59:33.321887 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c1d6c8a_41c7_48a0_853c_d1df60efb422.slice/crio-c41ec476ad4e74ffbf81e04c66c5166091e76a788065bf562dbea5e336a53578 WatchSource:0}: Error finding container c41ec476ad4e74ffbf81e04c66c5166091e76a788065bf562dbea5e336a53578: Status 404 returned error can't find the container with id c41ec476ad4e74ffbf81e04c66c5166091e76a788065bf562dbea5e336a53578 Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323276 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a87278a-c899-40df-99ef-324a5415be60-secret-volume\") pod \"collect-profiles-29500485-xsfgq\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323326 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/39d5f083-20be-4cb1-9c72-d7a52d54a578-stats-auth\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " 
pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323350 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0708251f-5f32-4341-9813-9f3e4f19b5e1-srv-cert\") pod \"olm-operator-6b444d44fb-sz8zf\" (UID: \"0708251f-5f32-4341-9813-9f3e4f19b5e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323371 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d5f083-20be-4cb1-9c72-d7a52d54a578-service-ca-bundle\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323408 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/421043e2-e94a-4b1b-8571-ea62b753b06d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323441 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q7sf\" (UniqueName: \"kubernetes.io/projected/3a69a2f5-7410-4d20-8ced-74a165eb1e2e-kube-api-access-7q7sf\") pod \"service-ca-9c57cc56f-fsn6h\" (UID: \"3a69a2f5-7410-4d20-8ced-74a165eb1e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323462 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bqpt\" (UniqueName: 
\"kubernetes.io/projected/27e6145f-5037-4a7c-99dc-4c4abcede9e2-kube-api-access-9bqpt\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323487 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-certificates\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323521 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2qz\" (UniqueName: \"kubernetes.io/projected/a6100093-1950-472d-a74f-f0aac942416b-kube-api-access-4r2qz\") pod \"dns-operator-744455d44c-t6w8r\" (UID: \"a6100093-1950-472d-a74f-f0aac942416b\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323542 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad7721d8-a99d-4aa4-a3d3-d6c42f813514-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-twjvl\" (UID: \"ad7721d8-a99d-4aa4-a3d3-d6c42f813514\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323561 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw447\" (UniqueName: \"kubernetes.io/projected/3a87278a-c899-40df-99ef-324a5415be60-kube-api-access-lw447\") pod \"collect-profiles-29500485-xsfgq\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323576 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39d5f083-20be-4cb1-9c72-d7a52d54a578-metrics-certs\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323592 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c198ab6e-61b2-4041-8c36-daa58cdc0c9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-w4klh\" (UID: \"c198ab6e-61b2-4041-8c36-daa58cdc0c9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323619 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qvk\" (UniqueName: \"kubernetes.io/projected/0708251f-5f32-4341-9813-9f3e4f19b5e1-kube-api-access-97qvk\") pod \"olm-operator-6b444d44fb-sz8zf\" (UID: \"0708251f-5f32-4341-9813-9f3e4f19b5e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323636 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c389a4ac-ad09-48a6-8f22-716afcca72b1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323662 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gz2jf\" (UniqueName: \"kubernetes.io/projected/8b008e52-e741-4db5-9b52-5aaef21ff009-kube-api-access-gz2jf\") pod \"ingress-canary-l76bs\" (UID: \"8b008e52-e741-4db5-9b52-5aaef21ff009\") " pod="openshift-ingress-canary/ingress-canary-l76bs" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323675 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6n4g\" (UniqueName: \"kubernetes.io/projected/3a169060-1fa6-45cc-9c6a-61f3f74ddd0b-kube-api-access-s6n4g\") pod \"catalog-operator-68c6474976-9cfdj\" (UID: \"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323691 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d699c1d-88f3-4375-9400-644dfd53edae-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bj99s\" (UID: \"3d699c1d-88f3-4375-9400-644dfd53edae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323710 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-socket-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323726 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76d76de9-7fea-489e-9a4a-9498ac01041a-apiservice-cert\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323743 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a6f28ef-1382-427a-a202-8f8559f74f94-proxy-tls\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323758 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4jld\" (UniqueName: \"kubernetes.io/projected/3d699c1d-88f3-4375-9400-644dfd53edae-kube-api-access-n4jld\") pod \"package-server-manager-789f6589d5-bj99s\" (UID: \"3d699c1d-88f3-4375-9400-644dfd53edae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323798 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c198ab6e-61b2-4041-8c36-daa58cdc0c9c-config\") pod \"kube-apiserver-operator-766d6c64bb-w4klh\" (UID: \"c198ab6e-61b2-4041-8c36-daa58cdc0c9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323812 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z67pl\" (UniqueName: \"kubernetes.io/projected/aaa7c8d8-41fd-4212-b022-6939cc27d1d3-kube-api-access-z67pl\") pod \"migrator-59844c95c7-cbbdl\" (UID: \"aaa7c8d8-41fd-4212-b022-6939cc27d1d3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323839 4925 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/421043e2-e94a-4b1b-8571-ea62b753b06d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323855 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/069939c4-f5e9-4dc7-891a-018a475a4871-machine-approver-tls\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323889 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76717f70-4bab-41ce-aa51-a0e4a634c248-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bxgj\" (UID: \"76717f70-4bab-41ce-aa51-a0e4a634c248\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323918 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b2pc\" (UniqueName: \"kubernetes.io/projected/76717f70-4bab-41ce-aa51-a0e4a634c248-kube-api-access-7b2pc\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bxgj\" (UID: \"76717f70-4bab-41ce-aa51-a0e4a634c248\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323937 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h8kb\" (UniqueName: 
\"kubernetes.io/projected/ad7721d8-a99d-4aa4-a3d3-d6c42f813514-kube-api-access-7h8kb\") pod \"machine-config-controller-84d6567774-twjvl\" (UID: \"ad7721d8-a99d-4aa4-a3d3-d6c42f813514\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323955 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-plugins-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.323972 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a69a2f5-7410-4d20-8ced-74a165eb1e2e-signing-cabundle\") pod \"service-ca-9c57cc56f-fsn6h\" (UID: \"3a69a2f5-7410-4d20-8ced-74a165eb1e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324024 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6100093-1950-472d-a74f-f0aac942416b-metrics-tls\") pod \"dns-operator-744455d44c-t6w8r\" (UID: \"a6100093-1950-472d-a74f-f0aac942416b\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324058 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-846p5\" (UniqueName: \"kubernetes.io/projected/39d5f083-20be-4cb1-9c72-d7a52d54a578-kube-api-access-846p5\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 
10:59:33.324080 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4zmm\" (UniqueName: \"kubernetes.io/projected/069939c4-f5e9-4dc7-891a-018a475a4871-kube-api-access-n4zmm\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324096 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404fe0cb-7979-4e18-8b34-24c961c0584b-serving-cert\") pod \"service-ca-operator-777779d784-sch2v\" (UID: \"404fe0cb-7979-4e18-8b34-24c961c0584b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324117 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d-node-bootstrap-token\") pod \"machine-config-server-5wtgq\" (UID: \"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d\") " pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324147 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a87278a-c899-40df-99ef-324a5415be60-config-volume\") pod \"collect-profiles-29500485-xsfgq\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324188 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b008e52-e741-4db5-9b52-5aaef21ff009-cert\") pod \"ingress-canary-l76bs\" (UID: 
\"8b008e52-e741-4db5-9b52-5aaef21ff009\") " pod="openshift-ingress-canary/ingress-canary-l76bs" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324229 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/069939c4-f5e9-4dc7-891a-018a475a4871-auth-proxy-config\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324252 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcs9p\" (UniqueName: \"kubernetes.io/projected/693d8818-a349-4e21-80cd-26caca3271b5-kube-api-access-dcs9p\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghpq7\" (UID: \"693d8818-a349-4e21-80cd-26caca3271b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324274 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/caf57c65-243f-462a-ac93-83d3740c4287-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zbmjc\" (UID: \"caf57c65-243f-462a-ac93-83d3740c4287\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324292 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404fe0cb-7979-4e18-8b34-24c961c0584b-config\") pod \"service-ca-operator-777779d784-sch2v\" (UID: \"404fe0cb-7979-4e18-8b34-24c961c0584b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324309 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/27e6145f-5037-4a7c-99dc-4c4abcede9e2-etcd-ca\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324364 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77x8\" (UniqueName: \"kubernetes.io/projected/c389a4ac-ad09-48a6-8f22-716afcca72b1-kube-api-access-t77x8\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324385 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34adde6a-61e7-4b44-9a72-72972f734a3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s4vpj\" (UID: \"34adde6a-61e7-4b44-9a72-72972f734a3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324401 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wlx\" (UniqueName: \"kubernetes.io/projected/76d76de9-7fea-489e-9a4a-9498ac01041a-kube-api-access-f5wlx\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324419 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-registration-dir\") pod 
\"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324465 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76717f70-4bab-41ce-aa51-a0e4a634c248-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bxgj\" (UID: \"76717f70-4bab-41ce-aa51-a0e4a634c248\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324491 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a6f28ef-1382-427a-a202-8f8559f74f94-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324507 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3a169060-1fa6-45cc-9c6a-61f3f74ddd0b-profile-collector-cert\") pod \"catalog-operator-68c6474976-9cfdj\" (UID: \"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324569 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 
10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324587 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-bound-sa-token\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324604 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e6145f-5037-4a7c-99dc-4c4abcede9e2-serving-cert\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324630 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9np62\" (UniqueName: \"kubernetes.io/projected/caf57c65-243f-462a-ac93-83d3740c4287-kube-api-access-9np62\") pod \"multus-admission-controller-857f4d67dd-zbmjc\" (UID: \"caf57c65-243f-462a-ac93-83d3740c4287\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324647 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a6f28ef-1382-427a-a202-8f8559f74f94-images\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324663 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e6145f-5037-4a7c-99dc-4c4abcede9e2-config\") pod 
\"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324678 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/27e6145f-5037-4a7c-99dc-4c4abcede9e2-etcd-service-ca\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324694 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5cda6996-5995-4c69-888f-3a8838e992d9-metrics-tls\") pod \"dns-default-nnqww\" (UID: \"5cda6996-5995-4c69-888f-3a8838e992d9\") " pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324710 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c389a4ac-ad09-48a6-8f22-716afcca72b1-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324754 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad7721d8-a99d-4aa4-a3d3-d6c42f813514-proxy-tls\") pod \"machine-config-controller-84d6567774-twjvl\" (UID: \"ad7721d8-a99d-4aa4-a3d3-d6c42f813514\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324768 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/76d76de9-7fea-489e-9a4a-9498ac01041a-tmpfs\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324784 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kn82k\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324801 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdtwx\" (UniqueName: \"kubernetes.io/projected/0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d-kube-api-access-gdtwx\") pod \"machine-config-server-5wtgq\" (UID: \"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d\") " pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324818 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ss5z\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-kube-api-access-2ss5z\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324834 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76d76de9-7fea-489e-9a4a-9498ac01041a-webhook-cert\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324850 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-csi-data-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324877 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-trusted-ca\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324894 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5xvf\" (UniqueName: \"kubernetes.io/projected/5a6f28ef-1382-427a-a202-8f8559f74f94-kube-api-access-q5xvf\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324909 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqsn\" (UniqueName: \"kubernetes.io/projected/404fe0cb-7979-4e18-8b34-24c961c0584b-kube-api-access-pvqsn\") pod \"service-ca-operator-777779d784-sch2v\" (UID: \"404fe0cb-7979-4e18-8b34-24c961c0584b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324955 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/0708251f-5f32-4341-9813-9f3e4f19b5e1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sz8zf\" (UID: \"0708251f-5f32-4341-9813-9f3e4f19b5e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324980 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c198ab6e-61b2-4041-8c36-daa58cdc0c9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-w4klh\" (UID: \"c198ab6e-61b2-4041-8c36-daa58cdc0c9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324996 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a69a2f5-7410-4d20-8ced-74a165eb1e2e-signing-key\") pod \"service-ca-9c57cc56f-fsn6h\" (UID: \"3a69a2f5-7410-4d20-8ced-74a165eb1e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.325011 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w67tf\" (UniqueName: \"kubernetes.io/projected/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-kube-api-access-w67tf\") pod \"marketplace-operator-79b997595-kn82k\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.325025 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d-certs\") pod \"machine-config-server-5wtgq\" (UID: \"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d\") " pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc 
kubenswrapper[4925]: I0202 10:59:33.325040 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/39d5f083-20be-4cb1-9c72-d7a52d54a578-default-certificate\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.325061 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-tls\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.325098 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-mountpoint-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.325159 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d774p\" (UniqueName: \"kubernetes.io/projected/f6f710ee-2823-4865-890e-1506e7eca156-kube-api-access-d774p\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.325177 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kn82k\" (UID: 
\"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324831 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/421043e2-e94a-4b1b-8571-ea62b753b06d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.324401 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.326205 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34adde6a-61e7-4b44-9a72-72972f734a3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s4vpj\" (UID: \"34adde6a-61e7-4b44-9a72-72972f734a3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.327579 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27e6145f-5037-4a7c-99dc-4c4abcede9e2-etcd-client\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.327616 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/069939c4-f5e9-4dc7-891a-018a475a4871-config\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" 
Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.327660 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/693d8818-a349-4e21-80cd-26caca3271b5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghpq7\" (UID: \"693d8818-a349-4e21-80cd-26caca3271b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.327710 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cda6996-5995-4c69-888f-3a8838e992d9-config-volume\") pod \"dns-default-nnqww\" (UID: \"5cda6996-5995-4c69-888f-3a8838e992d9\") " pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.327727 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs9m6\" (UniqueName: \"kubernetes.io/projected/5cda6996-5995-4c69-888f-3a8838e992d9-kube-api-access-gs9m6\") pod \"dns-default-nnqww\" (UID: \"5cda6996-5995-4c69-888f-3a8838e992d9\") " pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.327848 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3a169060-1fa6-45cc-9c6a-61f3f74ddd0b-srv-cert\") pod \"catalog-operator-68c6474976-9cfdj\" (UID: \"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.327904 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c389a4ac-ad09-48a6-8f22-716afcca72b1-metrics-tls\") 
pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.327933 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34adde6a-61e7-4b44-9a72-72972f734a3c-config\") pod \"kube-controller-manager-operator-78b949d7b-s4vpj\" (UID: \"34adde6a-61e7-4b44-9a72-72972f734a3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.328832 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-trusted-ca\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.329819 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/069939c4-f5e9-4dc7-891a-018a475a4871-auth-proxy-config\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.330576 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/069939c4-f5e9-4dc7-891a-018a475a4871-machine-approver-tls\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.332482 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/069939c4-f5e9-4dc7-891a-018a475a4871-config\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.333201 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-certificates\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.333942 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:33.833927875 +0000 UTC m=+150.838176837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.338216 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-tls\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.342184 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/421043e2-e94a-4b1b-8571-ea62b753b06d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.344725 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v7bdd"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.355867 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7gsrw"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.378265 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-bound-sa-token\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.396777 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ss5z\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-kube-api-access-2ss5z\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.422334 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zmm\" (UniqueName: \"kubernetes.io/projected/069939c4-f5e9-4dc7-891a-018a475a4871-kube-api-access-n4zmm\") pod \"machine-approver-56656f9798-ldjvb\" (UID: \"069939c4-f5e9-4dc7-891a-018a475a4871\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.428661 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.428793 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:33.928763546 +0000 UTC m=+150.933012518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.428957 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/27e6145f-5037-4a7c-99dc-4c4abcede9e2-etcd-service-ca\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.428994 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e6145f-5037-4a7c-99dc-4c4abcede9e2-config\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429020 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5cda6996-5995-4c69-888f-3a8838e992d9-metrics-tls\") pod \"dns-default-nnqww\" (UID: \"5cda6996-5995-4c69-888f-3a8838e992d9\") " pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429047 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c389a4ac-ad09-48a6-8f22-716afcca72b1-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429070 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad7721d8-a99d-4aa4-a3d3-d6c42f813514-proxy-tls\") pod \"machine-config-controller-84d6567774-twjvl\" (UID: \"ad7721d8-a99d-4aa4-a3d3-d6c42f813514\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429108 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/76d76de9-7fea-489e-9a4a-9498ac01041a-tmpfs\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429132 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76d76de9-7fea-489e-9a4a-9498ac01041a-webhook-cert\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429153 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-csi-data-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429173 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-kn82k\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429196 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdtwx\" (UniqueName: \"kubernetes.io/projected/0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d-kube-api-access-gdtwx\") pod \"machine-config-server-5wtgq\" (UID: \"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d\") " pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429221 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5xvf\" (UniqueName: \"kubernetes.io/projected/5a6f28ef-1382-427a-a202-8f8559f74f94-kube-api-access-q5xvf\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429243 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqsn\" (UniqueName: \"kubernetes.io/projected/404fe0cb-7979-4e18-8b34-24c961c0584b-kube-api-access-pvqsn\") pod \"service-ca-operator-777779d784-sch2v\" (UID: \"404fe0cb-7979-4e18-8b34-24c961c0584b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429303 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0708251f-5f32-4341-9813-9f3e4f19b5e1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sz8zf\" (UID: \"0708251f-5f32-4341-9813-9f3e4f19b5e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429328 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c198ab6e-61b2-4041-8c36-daa58cdc0c9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-w4klh\" (UID: \"c198ab6e-61b2-4041-8c36-daa58cdc0c9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429348 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a69a2f5-7410-4d20-8ced-74a165eb1e2e-signing-key\") pod \"service-ca-9c57cc56f-fsn6h\" (UID: \"3a69a2f5-7410-4d20-8ced-74a165eb1e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429413 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w67tf\" (UniqueName: \"kubernetes.io/projected/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-kube-api-access-w67tf\") pod \"marketplace-operator-79b997595-kn82k\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429433 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d-certs\") pod \"machine-config-server-5wtgq\" (UID: \"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d\") " pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429453 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/39d5f083-20be-4cb1-9c72-d7a52d54a578-default-certificate\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: 
I0202 10:59:33.429475 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-mountpoint-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429541 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kn82k\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429564 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d774p\" (UniqueName: \"kubernetes.io/projected/f6f710ee-2823-4865-890e-1506e7eca156-kube-api-access-d774p\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429588 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34adde6a-61e7-4b44-9a72-72972f734a3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s4vpj\" (UID: \"34adde6a-61e7-4b44-9a72-72972f734a3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429609 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27e6145f-5037-4a7c-99dc-4c4abcede9e2-etcd-client\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429635 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/693d8818-a349-4e21-80cd-26caca3271b5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghpq7\" (UID: \"693d8818-a349-4e21-80cd-26caca3271b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429827 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/27e6145f-5037-4a7c-99dc-4c4abcede9e2-etcd-service-ca\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429841 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cda6996-5995-4c69-888f-3a8838e992d9-config-volume\") pod \"dns-default-nnqww\" (UID: \"5cda6996-5995-4c69-888f-3a8838e992d9\") " pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429926 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs9m6\" (UniqueName: \"kubernetes.io/projected/5cda6996-5995-4c69-888f-3a8838e992d9-kube-api-access-gs9m6\") pod \"dns-default-nnqww\" (UID: \"5cda6996-5995-4c69-888f-3a8838e992d9\") " pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429956 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3a169060-1fa6-45cc-9c6a-61f3f74ddd0b-srv-cert\") pod \"catalog-operator-68c6474976-9cfdj\" (UID: 
\"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.429978 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c389a4ac-ad09-48a6-8f22-716afcca72b1-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430000 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34adde6a-61e7-4b44-9a72-72972f734a3c-config\") pod \"kube-controller-manager-operator-78b949d7b-s4vpj\" (UID: \"34adde6a-61e7-4b44-9a72-72972f734a3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430026 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a87278a-c899-40df-99ef-324a5415be60-secret-volume\") pod \"collect-profiles-29500485-xsfgq\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430047 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/39d5f083-20be-4cb1-9c72-d7a52d54a578-stats-auth\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430069 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/39d5f083-20be-4cb1-9c72-d7a52d54a578-service-ca-bundle\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430096 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0708251f-5f32-4341-9813-9f3e4f19b5e1-srv-cert\") pod \"olm-operator-6b444d44fb-sz8zf\" (UID: \"0708251f-5f32-4341-9813-9f3e4f19b5e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430135 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q7sf\" (UniqueName: \"kubernetes.io/projected/3a69a2f5-7410-4d20-8ced-74a165eb1e2e-kube-api-access-7q7sf\") pod \"service-ca-9c57cc56f-fsn6h\" (UID: \"3a69a2f5-7410-4d20-8ced-74a165eb1e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430159 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bqpt\" (UniqueName: \"kubernetes.io/projected/27e6145f-5037-4a7c-99dc-4c4abcede9e2-kube-api-access-9bqpt\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430182 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad7721d8-a99d-4aa4-a3d3-d6c42f813514-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-twjvl\" (UID: \"ad7721d8-a99d-4aa4-a3d3-d6c42f813514\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430203 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw447\" (UniqueName: \"kubernetes.io/projected/3a87278a-c899-40df-99ef-324a5415be60-kube-api-access-lw447\") pod \"collect-profiles-29500485-xsfgq\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.430225 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39d5f083-20be-4cb1-9c72-d7a52d54a578-metrics-certs\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.432112 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-mountpoint-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.433723 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-csi-data-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.433961 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2qz\" (UniqueName: \"kubernetes.io/projected/a6100093-1950-472d-a74f-f0aac942416b-kube-api-access-4r2qz\") pod \"dns-operator-744455d44c-t6w8r\" (UID: \"a6100093-1950-472d-a74f-f0aac942416b\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 
10:59:33.434008 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c198ab6e-61b2-4041-8c36-daa58cdc0c9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-w4klh\" (UID: \"c198ab6e-61b2-4041-8c36-daa58cdc0c9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434033 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qvk\" (UniqueName: \"kubernetes.io/projected/0708251f-5f32-4341-9813-9f3e4f19b5e1-kube-api-access-97qvk\") pod \"olm-operator-6b444d44fb-sz8zf\" (UID: \"0708251f-5f32-4341-9813-9f3e4f19b5e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434053 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c389a4ac-ad09-48a6-8f22-716afcca72b1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434082 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz2jf\" (UniqueName: \"kubernetes.io/projected/8b008e52-e741-4db5-9b52-5aaef21ff009-kube-api-access-gz2jf\") pod \"ingress-canary-l76bs\" (UID: \"8b008e52-e741-4db5-9b52-5aaef21ff009\") " pod="openshift-ingress-canary/ingress-canary-l76bs" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434104 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6n4g\" (UniqueName: \"kubernetes.io/projected/3a169060-1fa6-45cc-9c6a-61f3f74ddd0b-kube-api-access-s6n4g\") pod \"catalog-operator-68c6474976-9cfdj\" (UID: 
\"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434147 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d699c1d-88f3-4375-9400-644dfd53edae-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bj99s\" (UID: \"3d699c1d-88f3-4375-9400-644dfd53edae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434171 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-socket-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434215 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a6f28ef-1382-427a-a202-8f8559f74f94-proxy-tls\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434242 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76d76de9-7fea-489e-9a4a-9498ac01041a-apiservice-cert\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434270 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4jld\" 
(UniqueName: \"kubernetes.io/projected/3d699c1d-88f3-4375-9400-644dfd53edae-kube-api-access-n4jld\") pod \"package-server-manager-789f6589d5-bj99s\" (UID: \"3d699c1d-88f3-4375-9400-644dfd53edae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434250 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34adde6a-61e7-4b44-9a72-72972f734a3c-config\") pod \"kube-controller-manager-operator-78b949d7b-s4vpj\" (UID: \"34adde6a-61e7-4b44-9a72-72972f734a3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434293 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c198ab6e-61b2-4041-8c36-daa58cdc0c9c-config\") pod \"kube-apiserver-operator-766d6c64bb-w4klh\" (UID: \"c198ab6e-61b2-4041-8c36-daa58cdc0c9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434354 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z67pl\" (UniqueName: \"kubernetes.io/projected/aaa7c8d8-41fd-4212-b022-6939cc27d1d3-kube-api-access-z67pl\") pod \"migrator-59844c95c7-cbbdl\" (UID: \"aaa7c8d8-41fd-4212-b022-6939cc27d1d3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434402 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76717f70-4bab-41ce-aa51-a0e4a634c248-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bxgj\" (UID: \"76717f70-4bab-41ce-aa51-a0e4a634c248\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434446 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b2pc\" (UniqueName: \"kubernetes.io/projected/76717f70-4bab-41ce-aa51-a0e4a634c248-kube-api-access-7b2pc\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bxgj\" (UID: \"76717f70-4bab-41ce-aa51-a0e4a634c248\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434478 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6100093-1950-472d-a74f-f0aac942416b-metrics-tls\") pod \"dns-operator-744455d44c-t6w8r\" (UID: \"a6100093-1950-472d-a74f-f0aac942416b\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434504 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h8kb\" (UniqueName: \"kubernetes.io/projected/ad7721d8-a99d-4aa4-a3d3-d6c42f813514-kube-api-access-7h8kb\") pod \"machine-config-controller-84d6567774-twjvl\" (UID: \"ad7721d8-a99d-4aa4-a3d3-d6c42f813514\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434533 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-plugins-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434567 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a69a2f5-7410-4d20-8ced-74a165eb1e2e-signing-cabundle\") pod \"service-ca-9c57cc56f-fsn6h\" (UID: \"3a69a2f5-7410-4d20-8ced-74a165eb1e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434644 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/404fe0cb-7979-4e18-8b34-24c961c0584b-serving-cert\") pod \"service-ca-operator-777779d784-sch2v\" (UID: \"404fe0cb-7979-4e18-8b34-24c961c0584b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434668 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d-node-bootstrap-token\") pod \"machine-config-server-5wtgq\" (UID: \"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d\") " pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434691 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-846p5\" (UniqueName: \"kubernetes.io/projected/39d5f083-20be-4cb1-9c72-d7a52d54a578-kube-api-access-846p5\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434718 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b008e52-e741-4db5-9b52-5aaef21ff009-cert\") pod \"ingress-canary-l76bs\" (UID: \"8b008e52-e741-4db5-9b52-5aaef21ff009\") " pod="openshift-ingress-canary/ingress-canary-l76bs" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434746 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a87278a-c899-40df-99ef-324a5415be60-config-volume\") pod \"collect-profiles-29500485-xsfgq\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434781 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcs9p\" (UniqueName: \"kubernetes.io/projected/693d8818-a349-4e21-80cd-26caca3271b5-kube-api-access-dcs9p\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghpq7\" (UID: \"693d8818-a349-4e21-80cd-26caca3271b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434813 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/caf57c65-243f-462a-ac93-83d3740c4287-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zbmjc\" (UID: \"caf57c65-243f-462a-ac93-83d3740c4287\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434835 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404fe0cb-7979-4e18-8b34-24c961c0584b-config\") pod \"service-ca-operator-777779d784-sch2v\" (UID: \"404fe0cb-7979-4e18-8b34-24c961c0584b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434863 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/27e6145f-5037-4a7c-99dc-4c4abcede9e2-etcd-ca\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434889 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t77x8\" (UniqueName: \"kubernetes.io/projected/c389a4ac-ad09-48a6-8f22-716afcca72b1-kube-api-access-t77x8\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434918 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34adde6a-61e7-4b44-9a72-72972f734a3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s4vpj\" (UID: \"34adde6a-61e7-4b44-9a72-72972f734a3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.434970 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wlx\" (UniqueName: \"kubernetes.io/projected/76d76de9-7fea-489e-9a4a-9498ac01041a-kube-api-access-f5wlx\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.435002 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-registration-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.435032 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/76717f70-4bab-41ce-aa51-a0e4a634c248-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bxgj\" (UID: \"76717f70-4bab-41ce-aa51-a0e4a634c248\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.435065 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a6f28ef-1382-427a-a202-8f8559f74f94-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.435092 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3a169060-1fa6-45cc-9c6a-61f3f74ddd0b-profile-collector-cert\") pod \"catalog-operator-68c6474976-9cfdj\" (UID: \"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.435473 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.435533 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e6145f-5037-4a7c-99dc-4c4abcede9e2-serving-cert\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.435629 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9np62\" (UniqueName: \"kubernetes.io/projected/caf57c65-243f-462a-ac93-83d3740c4287-kube-api-access-9np62\") pod \"multus-admission-controller-857f4d67dd-zbmjc\" (UID: \"caf57c65-243f-462a-ac93-83d3740c4287\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.435664 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a6f28ef-1382-427a-a202-8f8559f74f94-images\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.436540 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a6f28ef-1382-427a-a202-8f8559f74f94-images\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.436980 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d5f083-20be-4cb1-9c72-d7a52d54a578-service-ca-bundle\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.437364 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cda6996-5995-4c69-888f-3a8838e992d9-config-volume\") pod 
\"dns-default-nnqww\" (UID: \"5cda6996-5995-4c69-888f-3a8838e992d9\") " pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.438003 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kn82k\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.442102 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/76d76de9-7fea-489e-9a4a-9498ac01041a-tmpfs\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.442315 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c389a4ac-ad09-48a6-8f22-716afcca72b1-trusted-ca\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.442334 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c198ab6e-61b2-4041-8c36-daa58cdc0c9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-w4klh\" (UID: \"c198ab6e-61b2-4041-8c36-daa58cdc0c9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.442889 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/39d5f083-20be-4cb1-9c72-d7a52d54a578-default-certificate\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.443988 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a87278a-c899-40df-99ef-324a5415be60-config-volume\") pod \"collect-profiles-29500485-xsfgq\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.444013 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c389a4ac-ad09-48a6-8f22-716afcca72b1-metrics-tls\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.444382 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e6145f-5037-4a7c-99dc-4c4abcede9e2-config\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.445395 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad7721d8-a99d-4aa4-a3d3-d6c42f813514-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-twjvl\" (UID: \"ad7721d8-a99d-4aa4-a3d3-d6c42f813514\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.446034 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0708251f-5f32-4341-9813-9f3e4f19b5e1-srv-cert\") pod \"olm-operator-6b444d44fb-sz8zf\" (UID: \"0708251f-5f32-4341-9813-9f3e4f19b5e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.446693 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3a169060-1fa6-45cc-9c6a-61f3f74ddd0b-srv-cert\") pod \"catalog-operator-68c6474976-9cfdj\" (UID: \"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.447140 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5cda6996-5995-4c69-888f-3a8838e992d9-metrics-tls\") pod \"dns-default-nnqww\" (UID: \"5cda6996-5995-4c69-888f-3a8838e992d9\") " pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.447807 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39d5f083-20be-4cb1-9c72-d7a52d54a578-metrics-certs\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.448482 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d-certs\") pod \"machine-config-server-5wtgq\" (UID: \"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d\") " pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.448980 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/404fe0cb-7979-4e18-8b34-24c961c0584b-serving-cert\") pod \"service-ca-operator-777779d784-sch2v\" (UID: \"404fe0cb-7979-4e18-8b34-24c961c0584b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.449618 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76d76de9-7fea-489e-9a4a-9498ac01041a-webhook-cert\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.450923 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad7721d8-a99d-4aa4-a3d3-d6c42f813514-proxy-tls\") pod \"machine-config-controller-84d6567774-twjvl\" (UID: \"ad7721d8-a99d-4aa4-a3d3-d6c42f813514\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.451027 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-plugins-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.451551 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-socket-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.452715 4925 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:33.952692048 +0000 UTC m=+150.956941010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.455299 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c198ab6e-61b2-4041-8c36-daa58cdc0c9c-config\") pod \"kube-apiserver-operator-766d6c64bb-w4klh\" (UID: \"c198ab6e-61b2-4041-8c36-daa58cdc0c9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.455304 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a69a2f5-7410-4d20-8ced-74a165eb1e2e-signing-key\") pod \"service-ca-9c57cc56f-fsn6h\" (UID: \"3a69a2f5-7410-4d20-8ced-74a165eb1e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.455398 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6f710ee-2823-4865-890e-1506e7eca156-registration-dir\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.455928 4925 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76717f70-4bab-41ce-aa51-a0e4a634c248-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bxgj\" (UID: \"76717f70-4bab-41ce-aa51-a0e4a634c248\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.456035 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a69a2f5-7410-4d20-8ced-74a165eb1e2e-signing-cabundle\") pod \"service-ca-9c57cc56f-fsn6h\" (UID: \"3a69a2f5-7410-4d20-8ced-74a165eb1e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.456717 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a6f28ef-1382-427a-a202-8f8559f74f94-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.458747 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0708251f-5f32-4341-9813-9f3e4f19b5e1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sz8zf\" (UID: \"0708251f-5f32-4341-9813-9f3e4f19b5e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.465607 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b008e52-e741-4db5-9b52-5aaef21ff009-cert\") pod \"ingress-canary-l76bs\" (UID: \"8b008e52-e741-4db5-9b52-5aaef21ff009\") " pod="openshift-ingress-canary/ingress-canary-l76bs" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 
10:59:33.466215 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76717f70-4bab-41ce-aa51-a0e4a634c248-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bxgj\" (UID: \"76717f70-4bab-41ce-aa51-a0e4a634c248\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.466254 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/caf57c65-243f-462a-ac93-83d3740c4287-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zbmjc\" (UID: \"caf57c65-243f-462a-ac93-83d3740c4287\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.466953 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a87278a-c899-40df-99ef-324a5415be60-secret-volume\") pod \"collect-profiles-29500485-xsfgq\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.468219 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6100093-1950-472d-a74f-f0aac942416b-metrics-tls\") pod \"dns-operator-744455d44c-t6w8r\" (UID: \"a6100093-1950-472d-a74f-f0aac942416b\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.468462 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/693d8818-a349-4e21-80cd-26caca3271b5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghpq7\" (UID: 
\"693d8818-a349-4e21-80cd-26caca3271b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.468939 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34adde6a-61e7-4b44-9a72-72972f734a3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s4vpj\" (UID: \"34adde6a-61e7-4b44-9a72-72972f734a3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.470193 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76d76de9-7fea-489e-9a4a-9498ac01041a-apiservice-cert\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.470350 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a6f28ef-1382-427a-a202-8f8559f74f94-proxy-tls\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.470526 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d699c1d-88f3-4375-9400-644dfd53edae-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bj99s\" (UID: \"3d699c1d-88f3-4375-9400-644dfd53edae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.470713 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d-node-bootstrap-token\") pod \"machine-config-server-5wtgq\" (UID: \"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d\") " pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.471237 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3a169060-1fa6-45cc-9c6a-61f3f74ddd0b-profile-collector-cert\") pod \"catalog-operator-68c6474976-9cfdj\" (UID: \"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.471314 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs9m6\" (UniqueName: \"kubernetes.io/projected/5cda6996-5995-4c69-888f-3a8838e992d9-kube-api-access-gs9m6\") pod \"dns-default-nnqww\" (UID: \"5cda6996-5995-4c69-888f-3a8838e992d9\") " pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.478909 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/39d5f083-20be-4cb1-9c72-d7a52d54a578-stats-auth\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.490050 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e6145f-5037-4a7c-99dc-4c4abcede9e2-serving-cert\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.490203 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/27e6145f-5037-4a7c-99dc-4c4abcede9e2-etcd-ca\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.490291 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27e6145f-5037-4a7c-99dc-4c4abcede9e2-etcd-client\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.490832 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404fe0cb-7979-4e18-8b34-24c961c0584b-config\") pod \"service-ca-operator-777779d784-sch2v\" (UID: \"404fe0cb-7979-4e18-8b34-24c961c0584b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.492558 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kn82k\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.493782 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bqpt\" (UniqueName: \"kubernetes.io/projected/27e6145f-5037-4a7c-99dc-4c4abcede9e2-kube-api-access-9bqpt\") pod \"etcd-operator-b45778765-d7hhc\" (UID: \"27e6145f-5037-4a7c-99dc-4c4abcede9e2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.498318 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w67tf\" (UniqueName: \"kubernetes.io/projected/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-kube-api-access-w67tf\") pod \"marketplace-operator-79b997595-kn82k\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.498490 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.511557 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" event={"ID":"dc71a086-3e99-48d7-99d8-5a08c0425e16","Type":"ContainerStarted","Data":"79c94448fb2695bc5ff519454a0864f39bc6082a8925f3255b34dfc0cff7d113"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.512682 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" event={"ID":"45405c2c-780c-4190-8cad-466ecfd84d2d","Type":"ContainerStarted","Data":"8068bf9024d08071cceaf918f898c741afe0a897b5c57dd66a2ece9b9873b4d5"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.513471 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b55lq" event={"ID":"670ef54d-fb71-49c9-930b-cae1088d828d","Type":"ContainerStarted","Data":"901628361f22963b515f754484d7928f9e0059a5fab73fbce6f5370fb6ba29dd"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.514063 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" event={"ID":"0896c0b8-88f0-41d3-a630-9098a1bf6be7","Type":"ContainerStarted","Data":"b3f4db2a401bd2f90eec93d9bdfed5ffdcedb4f32bd15496e4da9c45b3930fe9"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.514944 
4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" event={"ID":"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b","Type":"ContainerStarted","Data":"2a98f0c0a44e75c3a22ffe0c5ca60cc1812b5711b25507d203a96bba893193af"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.515916 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" event={"ID":"0542f8d3-1555-4a7c-9c54-e3c075841559","Type":"ContainerStarted","Data":"e55596a6e0c17e017fb49adebd99b4921b891861b292abaff5c40858d17760e5"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.519068 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" event={"ID":"7babde6f-c3db-4e20-9871-e2b8da06c334","Type":"ContainerStarted","Data":"12794ca4fc1da342fe7afbdaa7f189327aaab0d188c2a91b29ba11da5a7e3c78"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.522031 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdtwx\" (UniqueName: \"kubernetes.io/projected/0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d-kube-api-access-gdtwx\") pod \"machine-config-server-5wtgq\" (UID: \"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d\") " pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.522453 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" event={"ID":"a284e563-4e19-4602-8475-282ed1c71e23","Type":"ContainerStarted","Data":"72ffa5d423b608df26e2aba39813c75d7fb5ca4335fb532b8802f968db4054f1"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.524525 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" 
event={"ID":"628e72ad-1a83-4e42-a5bd-3ab0c710993e","Type":"ContainerStarted","Data":"70c0422d504da56d8e51c91693e238cf830bd9950c7678aa25b8b4332eafc108"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.526140 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qx9mv" event={"ID":"2c1d6c8a-41c7-48a0-853c-d1df60efb422","Type":"ContainerStarted","Data":"c41ec476ad4e74ffbf81e04c66c5166091e76a788065bf562dbea5e336a53578"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.528083 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" event={"ID":"8af1feeb-ac81-4603-9e99-c06de71038f0","Type":"ContainerStarted","Data":"f63fc73d8029449d86edaf6188610d7b29bd88505af8f4402c344c760127d37a"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.529458 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" event={"ID":"e04827b9-eb5b-4326-a0d3-297e3fec4eef","Type":"ContainerStarted","Data":"ba0a6f9ac90ab68daaf28dc73b6b1694d76d6bfe5fd7b93cd5f0964c46e8c37c"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.530586 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" event={"ID":"5da7ca31-35e0-47b3-a877-63d50ed68d70","Type":"ContainerStarted","Data":"c65d7b31299d048e9cf3940e1f69c897f8dda04baad62101b69aabcd13152ea2"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.531919 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" event={"ID":"686d6cf9-761e-4394-ab8c-316841705a26","Type":"ContainerStarted","Data":"3dad6c612e79b4f43d13ca6bbc05b5af3d0be24315a5a57bcd98535006011687"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.532960 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-nzwbr" event={"ID":"81af45ef-2049-4155-9c0b-ae722e6b8c8a","Type":"ContainerStarted","Data":"206c0cabd9a7fe7bb12dafe0a1faf7730ab3796a5ad870dd401211c46d371b02"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.534538 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" event={"ID":"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e","Type":"ContainerStarted","Data":"f32c781bc80d5b433bd13c2bc28a6328c75a67639d7ac2b061cdbef428421c0f"} Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.536838 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.537123 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.037101524 +0000 UTC m=+151.041350486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.537319 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.537668 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.037653569 +0000 UTC m=+151.041902531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.539841 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q7sf\" (UniqueName: \"kubernetes.io/projected/3a69a2f5-7410-4d20-8ced-74a165eb1e2e-kube-api-access-7q7sf\") pod \"service-ca-9c57cc56f-fsn6h\" (UID: \"3a69a2f5-7410-4d20-8ced-74a165eb1e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.556881 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d774p\" (UniqueName: \"kubernetes.io/projected/f6f710ee-2823-4865-890e-1506e7eca156-kube-api-access-d774p\") pod \"csi-hostpathplugin-jhw58\" (UID: \"f6f710ee-2823-4865-890e-1506e7eca156\") " pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.577370 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5xvf\" (UniqueName: \"kubernetes.io/projected/5a6f28ef-1382-427a-a202-8f8559f74f94-kube-api-access-q5xvf\") pod \"machine-config-operator-74547568cd-gd5mx\" (UID: \"5a6f28ef-1382-427a-a202-8f8559f74f94\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: W0202 10:59:33.594623 4925 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod069939c4_f5e9_4dc7_891a_018a475a4871.slice/crio-0e713a8a3df136479d8a5452737d83c437cf3955c8bb06e2d42ad13ed87766d4 WatchSource:0}: Error finding container 0e713a8a3df136479d8a5452737d83c437cf3955c8bb06e2d42ad13ed87766d4: Status 404 returned error can't find the container with id 0e713a8a3df136479d8a5452737d83c437cf3955c8bb06e2d42ad13ed87766d4 Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.598541 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqsn\" (UniqueName: \"kubernetes.io/projected/404fe0cb-7979-4e18-8b34-24c961c0584b-kube-api-access-pvqsn\") pod \"service-ca-operator-777779d784-sch2v\" (UID: \"404fe0cb-7979-4e18-8b34-24c961c0584b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.617605 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz2jf\" (UniqueName: \"kubernetes.io/projected/8b008e52-e741-4db5-9b52-5aaef21ff009-kube-api-access-gz2jf\") pod \"ingress-canary-l76bs\" (UID: \"8b008e52-e741-4db5-9b52-5aaef21ff009\") " pod="openshift-ingress-canary/ingress-canary-l76bs" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.621362 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.636846 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.638692 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.638921 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.138889859 +0000 UTC m=+151.143138841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.639636 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.640003 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.139990038 +0000 UTC m=+151.144239000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.646748 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw447\" (UniqueName: \"kubernetes.io/projected/3a87278a-c899-40df-99ef-324a5415be60-kube-api-access-lw447\") pod \"collect-profiles-29500485-xsfgq\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.646753 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5wtgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.655493 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l76bs" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.656632 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z67pl\" (UniqueName: \"kubernetes.io/projected/aaa7c8d8-41fd-4212-b022-6939cc27d1d3-kube-api-access-z67pl\") pod \"migrator-59844c95c7-cbbdl\" (UID: \"aaa7c8d8-41fd-4212-b022-6939cc27d1d3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.677467 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jhw58" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.677842 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2qz\" (UniqueName: \"kubernetes.io/projected/a6100093-1950-472d-a74f-f0aac942416b-kube-api-access-4r2qz\") pod \"dns-operator-744455d44c-t6w8r\" (UID: \"a6100093-1950-472d-a74f-f0aac942416b\") " pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.687134 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.696677 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c198ab6e-61b2-4041-8c36-daa58cdc0c9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-w4klh\" (UID: \"c198ab6e-61b2-4041-8c36-daa58cdc0c9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.720671 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b2pc\" (UniqueName: \"kubernetes.io/projected/76717f70-4bab-41ce-aa51-a0e4a634c248-kube-api-access-7b2pc\") pod \"kube-storage-version-migrator-operator-b67b599dd-6bxgj\" (UID: \"76717f70-4bab-41ce-aa51-a0e4a634c248\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.742613 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:33 crc 
kubenswrapper[4925]: I0202 10:59:33.742659 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c389a4ac-ad09-48a6-8f22-716afcca72b1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.742787 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.242760959 +0000 UTC m=+151.247009981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.742882 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.743282 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:34.243265092 +0000 UTC m=+151.247514054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.760798 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcs9p\" (UniqueName: \"kubernetes.io/projected/693d8818-a349-4e21-80cd-26caca3271b5-kube-api-access-dcs9p\") pod \"control-plane-machine-set-operator-78cbb6b69f-ghpq7\" (UID: \"693d8818-a349-4e21-80cd-26caca3271b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" Feb 02 10:59:33 crc kubenswrapper[4925]: W0202 10:59:33.768733 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bcbcbcb_e19a_42dc_bc1f_01ce9f36432d.slice/crio-8e438b58bb964a5eb6124c31a7bf191fc038279fe1c8037d155a94fca71f6e77 WatchSource:0}: Error finding container 8e438b58bb964a5eb6124c31a7bf191fc038279fe1c8037d155a94fca71f6e77: Status 404 returned error can't find the container with id 8e438b58bb964a5eb6124c31a7bf191fc038279fe1c8037d155a94fca71f6e77 Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.779339 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97qvk\" (UniqueName: \"kubernetes.io/projected/0708251f-5f32-4341-9813-9f3e4f19b5e1-kube-api-access-97qvk\") pod \"olm-operator-6b444d44fb-sz8zf\" (UID: \"0708251f-5f32-4341-9813-9f3e4f19b5e1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 
10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.788544 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.801802 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.807460 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-846p5\" (UniqueName: \"kubernetes.io/projected/39d5f083-20be-4cb1-9c72-d7a52d54a578-kube-api-access-846p5\") pod \"router-default-5444994796-q87qm\" (UID: \"39d5f083-20be-4cb1-9c72-d7a52d54a578\") " pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.809575 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.811103 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kn82k"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.818702 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h8kb\" (UniqueName: \"kubernetes.io/projected/ad7721d8-a99d-4aa4-a3d3-d6c42f813514-kube-api-access-7h8kb\") pod \"machine-config-controller-84d6567774-twjvl\" (UID: \"ad7721d8-a99d-4aa4-a3d3-d6c42f813514\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.822513 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.830821 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.840852 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4jld\" (UniqueName: \"kubernetes.io/projected/3d699c1d-88f3-4375-9400-644dfd53edae-kube-api-access-n4jld\") pod \"package-server-manager-789f6589d5-bj99s\" (UID: \"3d699c1d-88f3-4375-9400-644dfd53edae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.843852 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.844571 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.344556004 +0000 UTC m=+151.348804966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.845373 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.861383 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.861906 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6n4g\" (UniqueName: \"kubernetes.io/projected/3a169060-1fa6-45cc-9c6a-61f3f74ddd0b-kube-api-access-s6n4g\") pod \"catalog-operator-68c6474976-9cfdj\" (UID: \"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.873034 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.880364 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t77x8\" (UniqueName: \"kubernetes.io/projected/c389a4ac-ad09-48a6-8f22-716afcca72b1-kube-api-access-t77x8\") pod \"ingress-operator-5b745b69d9-m5lvb\" (UID: \"c389a4ac-ad09-48a6-8f22-716afcca72b1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.881827 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.896545 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.896952 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.901548 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fsn6h"] Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.903540 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34adde6a-61e7-4b44-9a72-72972f734a3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s4vpj\" (UID: \"34adde6a-61e7-4b44-9a72-72972f734a3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.913655 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.923500 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wlx\" (UniqueName: \"kubernetes.io/projected/76d76de9-7fea-489e-9a4a-9498ac01041a-kube-api-access-f5wlx\") pod \"packageserver-d55dfcdfc-v6b8t\" (UID: \"76d76de9-7fea-489e-9a4a-9498ac01041a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.928955 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.946837 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:33 crc kubenswrapper[4925]: E0202 10:59:33.947202 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.447191261 +0000 UTC m=+151.451440223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.951296 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9np62\" (UniqueName: \"kubernetes.io/projected/caf57c65-243f-462a-ac93-83d3740c4287-kube-api-access-9np62\") pod \"multus-admission-controller-857f4d67dd-zbmjc\" (UID: \"caf57c65-243f-462a-ac93-83d3740c4287\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" Feb 02 10:59:33 crc kubenswrapper[4925]: I0202 10:59:33.985092 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nnqww"] Feb 02 10:59:34 crc 
kubenswrapper[4925]: I0202 10:59:34.049153 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.049304 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.549274594 +0000 UTC m=+151.553523556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.049746 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.050062 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:34.550054305 +0000 UTC m=+151.554303267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.116323 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.138539 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.151529 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.151905 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.651888891 +0000 UTC m=+151.656137853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.153215 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.166867 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.205792 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.254214 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.254553 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.754530978 +0000 UTC m=+151.758779940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.280588 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l76bs"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.308409 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jhw58"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.355917 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.356371 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.856354294 +0000 UTC m=+151.860603246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.413275 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d7hhc"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.456917 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.457298 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:34.957281716 +0000 UTC m=+151.961530678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.561523 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.561831 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:35.061815044 +0000 UTC m=+152.066064006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.563175 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.574708 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" event={"ID":"0896c0b8-88f0-41d3-a630-9098a1bf6be7","Type":"ContainerStarted","Data":"821ac78a63efb5e7992721068a94cd530af3e0bbde535f2aa022a28bc6536d72"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.580968 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t6w8r"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.600682 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.607979 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" event={"ID":"069939c4-f5e9-4dc7-891a-018a475a4871","Type":"ContainerStarted","Data":"c65e3ce2ecbdb8c5495c1d367b5e6518d101ae3b28d0fbaa9fa63d954d3287ba"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.608019 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" 
event={"ID":"069939c4-f5e9-4dc7-891a-018a475a4871","Type":"ContainerStarted","Data":"0e713a8a3df136479d8a5452737d83c437cf3955c8bb06e2d42ad13ed87766d4"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.616234 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" event={"ID":"45405c2c-780c-4190-8cad-466ecfd84d2d","Type":"ContainerStarted","Data":"3dafaac12b0282e867fb9421eb215c2f10db3cc404b77ef3f8f080c3d1898652"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.617345 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.619260 4925 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bf6lk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.619292 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" podUID="45405c2c-780c-4190-8cad-466ecfd84d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.624069 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" event={"ID":"7babde6f-c3db-4e20-9871-e2b8da06c334","Type":"ContainerStarted","Data":"60176d7393375eebf8073fc79bd0eac1ef3dff69901ca0405357d0c8c97f5af1"} Feb 02 10:59:34 crc kubenswrapper[4925]: W0202 10:59:34.631447 4925 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc198ab6e_61b2_4041_8c36_daa58cdc0c9c.slice/crio-bacb4add53a956a013043993118532851eb80caaf288037e693f9adcd992f8ff WatchSource:0}: Error finding container bacb4add53a956a013043993118532851eb80caaf288037e693f9adcd992f8ff: Status 404 returned error can't find the container with id bacb4add53a956a013043993118532851eb80caaf288037e693f9adcd992f8ff Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.650752 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.664265 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.666439 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:35.166427593 +0000 UTC m=+152.170676635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.691280 4925 generic.go:334] "Generic (PLEG): container finished" podID="e9fea0fb-a1ca-4ab8-a1fc-92673a76105e" containerID="5da3bb82eea6eca0ad10e8d2efc14e95c79d7e812428e5d7646e0e9757117f05" exitCode=0 Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.695345 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" event={"ID":"3a69a2f5-7410-4d20-8ced-74a165eb1e2e","Type":"ContainerStarted","Data":"5bf5aa3dfa0b90982888349fe274a9b73c0ca3c7205faafbc574111547c035b7"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.695406 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jhw58" event={"ID":"f6f710ee-2823-4865-890e-1506e7eca156","Type":"ContainerStarted","Data":"da5a50a623227a3d6bcbd5a8f6cde9e6f5024ad827c13fed0b39b50ca6caba1f"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.695501 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" event={"ID":"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e","Type":"ContainerDied","Data":"5da3bb82eea6eca0ad10e8d2efc14e95c79d7e812428e5d7646e0e9757117f05"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.697057 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" 
event={"ID":"686d6cf9-761e-4394-ab8c-316841705a26","Type":"ContainerStarted","Data":"891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.697735 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.698810 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nzwbr" event={"ID":"81af45ef-2049-4155-9c0b-ae722e6b8c8a","Type":"ContainerStarted","Data":"2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.700204 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b55lq" event={"ID":"670ef54d-fb71-49c9-930b-cae1088d828d","Type":"ContainerStarted","Data":"a7df88827863b6c00476637fb1a6ab2e2a9a3b7bf148b7dd1e572cb8bbae42c4"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.700562 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.702662 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.703187 4925 patch_prober.go:28] interesting pod/console-operator-58897d9998-b55lq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.703281 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b55lq" 
podUID="670ef54d-fb71-49c9-930b-cae1088d828d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.704471 4925 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7gsrw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.704500 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" podUID="686d6cf9-761e-4394-ab8c-316841705a26" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.706787 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" event={"ID":"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b","Type":"ContainerStarted","Data":"fb2b5547398b092739c59e0fe7485886652823344eba8c3729cc87ae46809d0f"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.708082 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.711982 4925 generic.go:334] "Generic (PLEG): container finished" podID="0542f8d3-1555-4a7c-9c54-e3c075841559" containerID="3cd0c1013ee9a375171eee2ad34f020c275f14c15bb2427e73487dbd07fc3e6c" exitCode=0 Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.712348 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" 
event={"ID":"0542f8d3-1555-4a7c-9c54-e3c075841559","Type":"ContainerDied","Data":"3cd0c1013ee9a375171eee2ad34f020c275f14c15bb2427e73487dbd07fc3e6c"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.713543 4925 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mpgcb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.713598 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" podUID="124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.715499 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" event={"ID":"8af1feeb-ac81-4603-9e99-c06de71038f0","Type":"ContainerStarted","Data":"2629f7f1e30bffce6d1925cbaeae8f23dde99bb269e7731e025baf086a093b86"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.721646 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" event={"ID":"5da7ca31-35e0-47b3-a877-63d50ed68d70","Type":"ContainerStarted","Data":"ceb433d1d810dd01b878c6776bf314b4658f98dc194917a6c88205c08f154c59"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.723317 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nnqww" event={"ID":"5cda6996-5995-4c69-888f-3a8838e992d9","Type":"ContainerStarted","Data":"e2b170fb528956ad97197888b269cccb0867a2a5bf56bb7d24d224b88f1cd610"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.731003 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5wtgq" event={"ID":"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d","Type":"ContainerStarted","Data":"645aa25992c0d0a776837df45e6b0f603dfaa7a80976ae14b97ca572bc8e97a8"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.731068 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5wtgq" event={"ID":"0bcbcbcb-e19a-42dc-bc1f-01ce9f36432d","Type":"ContainerStarted","Data":"8e438b58bb964a5eb6124c31a7bf191fc038279fe1c8037d155a94fca71f6e77"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.742683 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" event={"ID":"e04827b9-eb5b-4326-a0d3-297e3fec4eef","Type":"ContainerStarted","Data":"15ecc65a3ec4b9a38ffe8e4a8c522da797b5317d2e094b2938c97ecbfb3d22cc"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.757681 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" event={"ID":"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4","Type":"ContainerStarted","Data":"972e8c6a4c6fe6201938c6548ce23dfa7b9bfb9749a084d652f03b29b34a50c6"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.759178 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.764935 4925 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kn82k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.766555 4925 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.765769 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qx9mv" event={"ID":"2c1d6c8a-41c7-48a0-853c-d1df60efb422","Type":"ContainerStarted","Data":"e68bf8e6f536c97947cbda8e5a7ef730f57fec1440f8065be063b54a6ca26563"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.767376 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qx9mv" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.769056 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.769711 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:35.269690557 +0000 UTC m=+152.273939519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.778496 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.781223 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.788024 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l76bs" event={"ID":"8b008e52-e741-4db5-9b52-5aaef21ff009","Type":"ContainerStarted","Data":"796505d412a6ebae2f060e9267362b432cfd66e5405b5b0444e12bfdc1faadc8"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.794821 4925 generic.go:334] "Generic (PLEG): container finished" podID="628e72ad-1a83-4e42-a5bd-3ab0c710993e" containerID="2a666f1cf9497476ba492753b250afa47e2caf40cb35e1c183e2a4c44d84277c" exitCode=0 Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.794892 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" 
event={"ID":"628e72ad-1a83-4e42-a5bd-3ab0c710993e","Type":"ContainerDied","Data":"2a666f1cf9497476ba492753b250afa47e2caf40cb35e1c183e2a4c44d84277c"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.798634 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" event={"ID":"a284e563-4e19-4602-8475-282ed1c71e23","Type":"ContainerStarted","Data":"632704e62d739c0f8ab4fcfa808bacf7db6d7e34a8af1794868126ae50f2f09d"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.809037 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-q87qm" event={"ID":"39d5f083-20be-4cb1-9c72-d7a52d54a578","Type":"ContainerStarted","Data":"4b264e5bf758b2c622c07b38d1caa5aecd7c85457c2737332d33be82d560f4c6"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.820024 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.827919 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" event={"ID":"27e6145f-5037-4a7c-99dc-4c4abcede9e2","Type":"ContainerStarted","Data":"1101ce305c7824eada3eb1055ed264e24194ea5f8b277f7062a23ab0d64ffdb9"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.844623 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" event={"ID":"dc71a086-3e99-48d7-99d8-5a08c0425e16","Type":"ContainerStarted","Data":"f31c7e6a70de12f1af0d464d2a1ed57a83dcabd537f405e34e21f406d7814306"} Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.847570 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.853257 4925 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.866805 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.869068 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.869128 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.872535 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.873433 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf"] Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.875884 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:35.374760909 +0000 UTC m=+152.379009961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.876529 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.888049 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sch2v"] Feb 02 10:59:34 crc kubenswrapper[4925]: W0202 10:59:34.898284 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaa7c8d8_41fd_4212_b022_6939cc27d1d3.slice/crio-00298f561029a1b2fda00b17485eb68b702df9c3f59e290ba7e4768be3e872eb WatchSource:0}: Error finding container 00298f561029a1b2fda00b17485eb68b702df9c3f59e290ba7e4768be3e872eb: Status 404 returned error can't find the container with id 00298f561029a1b2fda00b17485eb68b702df9c3f59e290ba7e4768be3e872eb Feb 02 10:59:34 crc kubenswrapper[4925]: W0202 10:59:34.917931 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0708251f_5f32_4341_9813_9f3e4f19b5e1.slice/crio-6fcc9bef200cc27957b357882f8c518f69c14f5dd5d2c6149d9b845215e0877f WatchSource:0}: Error finding container 6fcc9bef200cc27957b357882f8c518f69c14f5dd5d2c6149d9b845215e0877f: Status 404 returned error can't find the container with id 
6fcc9bef200cc27957b357882f8c518f69c14f5dd5d2c6149d9b845215e0877f Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.937018 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.967945 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj"] Feb 02 10:59:34 crc kubenswrapper[4925]: I0202 10:59:34.976032 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:34 crc kubenswrapper[4925]: E0202 10:59:34.976438 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:35.47641727 +0000 UTC m=+152.480666242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: W0202 10:59:35.001134 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a169060_1fa6_45cc_9c6a_61f3f74ddd0b.slice/crio-c4066ac3550f3765836285475db963a9c8aafc60c4ca18e9f7c759c45331067e WatchSource:0}: Error finding container c4066ac3550f3765836285475db963a9c8aafc60c4ca18e9f7c759c45331067e: Status 404 returned error can't find the container with id c4066ac3550f3765836285475db963a9c8aafc60c4ca18e9f7c759c45331067e Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.059463 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb"] Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.101218 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:35 crc kubenswrapper[4925]: E0202 10:59:35.101632 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:35.601615492 +0000 UTC m=+152.605864454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.101978 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zbmjc"] Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.108491 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t"] Feb 02 10:59:35 crc kubenswrapper[4925]: W0202 10:59:35.149856 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc389a4ac_ad09_48a6_8f22_716afcca72b1.slice/crio-c17c5696d02eb98b24beb65144e63db095aa31debccadf2eebe857d7bee2f47e WatchSource:0}: Error finding container c17c5696d02eb98b24beb65144e63db095aa31debccadf2eebe857d7bee2f47e: Status 404 returned error can't find the container with id c17c5696d02eb98b24beb65144e63db095aa31debccadf2eebe857d7bee2f47e Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.208932 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:35 crc kubenswrapper[4925]: E0202 10:59:35.209886 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:35.709868027 +0000 UTC m=+152.714116999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.312817 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:35 crc kubenswrapper[4925]: E0202 10:59:35.313205 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:35.813187032 +0000 UTC m=+152.817436064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.414834 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:35 crc kubenswrapper[4925]: E0202 10:59:35.415558 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:35.915531092 +0000 UTC m=+152.919780054 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.505462 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5wtgq" podStartSLOduration=5.505440684 podStartE2EDuration="5.505440684s" podCreationTimestamp="2026-02-02 10:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:35.50151783 +0000 UTC m=+152.505766812" watchObservedRunningTime="2026-02-02 10:59:35.505440684 +0000 UTC m=+152.509689646" Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.517249 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:35 crc kubenswrapper[4925]: E0202 10:59:35.517618 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.017605725 +0000 UTC m=+153.021854687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.543487 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" podStartSLOduration=124.543469927 podStartE2EDuration="2m4.543469927s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:35.542590314 +0000 UTC m=+152.546839276" watchObservedRunningTime="2026-02-02 10:59:35.543469927 +0000 UTC m=+152.547718889" Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.619455 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:35 crc kubenswrapper[4925]: E0202 10:59:35.620004 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.119987895 +0000 UTC m=+153.124236857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.723574 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:35 crc kubenswrapper[4925]: E0202 10:59:35.724135 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.224122762 +0000 UTC m=+153.228371724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.747013 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7bdd" podStartSLOduration=124.746972785 podStartE2EDuration="2m4.746972785s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:35.739353964 +0000 UTC m=+152.743602936" watchObservedRunningTime="2026-02-02 10:59:35.746972785 +0000 UTC m=+152.751221747" Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.825440 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:35 crc kubenswrapper[4925]: E0202 10:59:35.826636 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.326613746 +0000 UTC m=+153.330862708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.858866 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" event={"ID":"e9fea0fb-a1ca-4ab8-a1fc-92673a76105e","Type":"ContainerStarted","Data":"a9d58ce5464a6e31e71c37f395cbf518def29fe3acb66721e47c9cc3e8c83628"} Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.858953 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.866358 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.866421 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.869049 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6bfv4" podStartSLOduration=124.869013894 podStartE2EDuration="2m4.869013894s" 
podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:35.857166951 +0000 UTC m=+152.861415913" watchObservedRunningTime="2026-02-02 10:59:35.869013894 +0000 UTC m=+152.873262866" Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.874865 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" event={"ID":"404fe0cb-7979-4e18-8b34-24c961c0584b","Type":"ContainerStarted","Data":"770b2662c0201f3ee94431221aebc4672886b841cff1ba3591c8bdb79298c61a"} Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.894064 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" event={"ID":"5a6f28ef-1382-427a-a202-8f8559f74f94","Type":"ContainerStarted","Data":"57af21a615a4430e75e846d999c9ca9ae0f358a88647066ae1f98e96e53c6eae"} Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.894143 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" event={"ID":"5a6f28ef-1382-427a-a202-8f8559f74f94","Type":"ContainerStarted","Data":"74b60e0a3ebd9ccf40cb529f6631e7325079abfbe3b5fb30575c5ff8f9c5081f"} Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.901126 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" event={"ID":"76717f70-4bab-41ce-aa51-a0e4a634c248","Type":"ContainerStarted","Data":"6a0e774eb7e55dcaca81cf5f29cbf0070e10d15120a8487a0f9b9040c41d46b0"} Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.901672 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" 
event={"ID":"76717f70-4bab-41ce-aa51-a0e4a634c248","Type":"ContainerStarted","Data":"13df37ded4789aaad615a7e7dd4421717e4d6e06171eccaa6d799f39038bfe72"} Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.911814 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-q87qm" event={"ID":"39d5f083-20be-4cb1-9c72-d7a52d54a578","Type":"ContainerStarted","Data":"c9e123901cacd55126621eee197b135a47237644432952b1d48ab5c44a6c25d0"} Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.912589 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" podStartSLOduration=124.912575163 podStartE2EDuration="2m4.912575163s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:35.9109509 +0000 UTC m=+152.915199862" watchObservedRunningTime="2026-02-02 10:59:35.912575163 +0000 UTC m=+152.916824135" Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.929295 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:35 crc kubenswrapper[4925]: E0202 10:59:35.930041 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.430030094 +0000 UTC m=+153.434279056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.935345 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-q87qm" podStartSLOduration=124.935324753 podStartE2EDuration="2m4.935324753s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:35.933163906 +0000 UTC m=+152.937412868" watchObservedRunningTime="2026-02-02 10:59:35.935324753 +0000 UTC m=+152.939573715" Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.948314 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" event={"ID":"5da7ca31-35e0-47b3-a877-63d50ed68d70","Type":"ContainerStarted","Data":"ade15a44e7ee4b9e4fa1640fac566d0137e8b81dc20757b48cdf7ac6b0b8e2d3"} Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.972980 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5cv2" podStartSLOduration=124.972960106 podStartE2EDuration="2m4.972960106s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:35.972068462 +0000 UTC m=+152.976317454" watchObservedRunningTime="2026-02-02 10:59:35.972960106 +0000 UTC m=+152.977209068" Feb 02 10:59:35 
crc kubenswrapper[4925]: I0202 10:59:35.986384 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" event={"ID":"34adde6a-61e7-4b44-9a72-72972f734a3c","Type":"ContainerStarted","Data":"ac9fcd0395724f398f55c17889ffc3b32ec0abac98d91a8b8984dc1e3cec3db2"} Feb 02 10:59:35 crc kubenswrapper[4925]: I0202 10:59:35.992961 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" event={"ID":"caf57c65-243f-462a-ac93-83d3740c4287","Type":"ContainerStarted","Data":"b6d5cd44b72fdacbb80d2416e6ec412b9e7301a124266f04ec0681ea58aca0c4"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.016315 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" event={"ID":"a6100093-1950-472d-a74f-f0aac942416b","Type":"ContainerStarted","Data":"3dc24f6f66e93d31d564b85185ffe17dce7b10a0a4cd425d7932350e01d3fc63"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.016376 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" event={"ID":"a6100093-1950-472d-a74f-f0aac942416b","Type":"ContainerStarted","Data":"1a91f566b9aa454421ddc7c425acfb13b5d97fb44e75a6a6d53103b93f851755"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.030571 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.031819 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.531797828 +0000 UTC m=+153.536046790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.034469 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" event={"ID":"27e6145f-5037-4a7c-99dc-4c4abcede9e2","Type":"ContainerStarted","Data":"303d7061155433a83290d2f4a6d504a8acc23f3eb7340a423e744f9ca28ecb2f"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.034776 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xkk7n" podStartSLOduration=125.034762286 podStartE2EDuration="2m5.034762286s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.019648957 +0000 UTC m=+153.023897919" watchObservedRunningTime="2026-02-02 10:59:36.034762286 +0000 UTC m=+153.039011248" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.063910 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl" event={"ID":"aaa7c8d8-41fd-4212-b022-6939cc27d1d3","Type":"ContainerStarted","Data":"c0ca263452fbae8f90b57b6e10c1403ed60d9e3728ab6c6ec9e36fb7490af6b9"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.063999 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl" event={"ID":"aaa7c8d8-41fd-4212-b022-6939cc27d1d3","Type":"ContainerStarted","Data":"00298f561029a1b2fda00b17485eb68b702df9c3f59e290ba7e4768be3e872eb"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.077527 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" event={"ID":"069939c4-f5e9-4dc7-891a-018a475a4871","Type":"ContainerStarted","Data":"9273f29acea7b31e052299380d91eb2da73bbdf21533aaf73b7c38ac03912414"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.081095 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" event={"ID":"3a69a2f5-7410-4d20-8ced-74a165eb1e2e","Type":"ContainerStarted","Data":"2c8cfb5648e519281fc31ca73c082c7d79c7d570920bbb36c792e8d0b15421ee"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.093521 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" event={"ID":"628e72ad-1a83-4e42-a5bd-3ab0c710993e","Type":"ContainerStarted","Data":"0616aab0ba969e9e21a9a52e25b3b0ced6bf6fc58c882e7ce870b1deaad5c840"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.110238 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" event={"ID":"693d8818-a349-4e21-80cd-26caca3271b5","Type":"ContainerStarted","Data":"cf64f0904b12767361c82416a3e55b9be96f48b76a63d425824b24255e0565a7"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.116327 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" event={"ID":"0708251f-5f32-4341-9813-9f3e4f19b5e1","Type":"ContainerStarted","Data":"6fcc9bef200cc27957b357882f8c518f69c14f5dd5d2c6149d9b845215e0877f"} Feb 02 10:59:36 crc kubenswrapper[4925]: 
I0202 10:59:36.116615 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.119376 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" event={"ID":"3a87278a-c899-40df-99ef-324a5415be60","Type":"ContainerStarted","Data":"1c2e4a77dbc7f61e608f64e191cf8dab8fe2d8a4a8eed742bef5af1f476b63eb"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.119413 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" event={"ID":"3a87278a-c899-40df-99ef-324a5415be60","Type":"ContainerStarted","Data":"2830db1531ef7294234f0ff6c218788a895cffa4e891135127fd25cc05a020a0"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.122023 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" event={"ID":"3d699c1d-88f3-4375-9400-644dfd53edae","Type":"ContainerStarted","Data":"d5871734ac2f057359c276b1fdee1b73d7f740b150dae639f727f89ada69da33"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.122077 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" event={"ID":"3d699c1d-88f3-4375-9400-644dfd53edae","Type":"ContainerStarted","Data":"c0e011e1869fba54d18ec1094f4a630d129b4684ddd56d6ea795d2c7dcaf40f5"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.127051 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" event={"ID":"c198ab6e-61b2-4041-8c36-daa58cdc0c9c","Type":"ContainerStarted","Data":"96eaba4a45e64fa8dceb6977d6eaf17f24fa54b10100c1d236cc92b4e2cacec9"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.127206 4925 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" event={"ID":"c198ab6e-61b2-4041-8c36-daa58cdc0c9c","Type":"ContainerStarted","Data":"bacb4add53a956a013043993118532851eb80caaf288037e693f9adcd992f8ff"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.134163 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.137691 4925 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sz8zf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.137770 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" podUID="0708251f-5f32-4341-9813-9f3e4f19b5e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.140754 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.640401743 +0000 UTC m=+153.644650905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.143759 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" podStartSLOduration=125.143732731 podStartE2EDuration="2m5.143732731s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.135263677 +0000 UTC m=+153.139512649" watchObservedRunningTime="2026-02-02 10:59:36.143732731 +0000 UTC m=+153.147981683" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.148919 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" event={"ID":"c389a4ac-ad09-48a6-8f22-716afcca72b1","Type":"ContainerStarted","Data":"c17c5696d02eb98b24beb65144e63db095aa31debccadf2eebe857d7bee2f47e"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.155632 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" event={"ID":"ad7721d8-a99d-4aa4-a3d3-d6c42f813514","Type":"ContainerStarted","Data":"65d560c78440b9691b432af6b32d64e983b79e6107f92c51175b4db7d05cfcc0"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.155683 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" 
event={"ID":"ad7721d8-a99d-4aa4-a3d3-d6c42f813514","Type":"ContainerStarted","Data":"f80c020372d765af1a25cde3395e0ba09a745a157fdb0ee4832ae354906d265e"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.218412 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" event={"ID":"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b","Type":"ContainerStarted","Data":"a914f45c11ee68c04d159ccce6cf2f4d07f04a9f724c4132d70bce306f05eed8"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.218481 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" event={"ID":"3a169060-1fa6-45cc-9c6a-61f3f74ddd0b","Type":"ContainerStarted","Data":"c4066ac3550f3765836285475db963a9c8aafc60c4ca18e9f7c759c45331067e"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.218900 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.236559 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l76bs" event={"ID":"8b008e52-e741-4db5-9b52-5aaef21ff009","Type":"ContainerStarted","Data":"81ffd7f082a9c1731857b3d59bcef9a3870e175c5335676e4c5ecfae4db89105"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.236952 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.237597 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.737566736 +0000 UTC m=+153.741815698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.240351 4925 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9cfdj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.240466 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" podUID="3a169060-1fa6-45cc-9c6a-61f3f74ddd0b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.242045 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nnqww" event={"ID":"5cda6996-5995-4c69-888f-3a8838e992d9","Type":"ContainerStarted","Data":"21966324f343cbd9d88dd27b2d9e8b560fc35685dae311347a40bc8f4dabf006"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.242744 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.249688 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" event={"ID":"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4","Type":"ContainerStarted","Data":"e1fbc146738b64aa9bb6292522b265aeb87055a14d0a10b63b07be753af3cd5a"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.251813 4925 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kn82k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.252320 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.255589 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" podStartSLOduration=125.255575981 podStartE2EDuration="2m5.255575981s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.253234379 +0000 UTC m=+153.257483351" watchObservedRunningTime="2026-02-02 10:59:36.255575981 +0000 UTC m=+153.259824943" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.312210 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" event={"ID":"76d76de9-7fea-489e-9a4a-9498ac01041a","Type":"ContainerStarted","Data":"307fd71f08ec61717414fc98c8fe00b348efb01991ac2afe25e076bc545e894b"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.313627 4925 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.329063 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nzfrn" event={"ID":"a284e563-4e19-4602-8475-282ed1c71e23","Type":"ContainerStarted","Data":"9457fc84b39275cae051a450653ece09cea709efbbf89debec54132db71d8300"} Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.329257 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.329353 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.329956 4925 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mpgcb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.330012 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" podUID="124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 
10:59:36.330139 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9nlsz" podStartSLOduration=125.330122387 podStartE2EDuration="2m5.330122387s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.327987051 +0000 UTC m=+153.332236013" watchObservedRunningTime="2026-02-02 10:59:36.330122387 +0000 UTC m=+153.334371349" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.330316 4925 patch_prober.go:28] interesting pod/console-operator-58897d9998-b55lq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.330342 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b55lq" podUID="670ef54d-fb71-49c9-930b-cae1088d828d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.330383 4925 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v6b8t container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.330406 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" podUID="76d76de9-7fea-489e-9a4a-9498ac01041a" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.358783 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.360826 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.860815267 +0000 UTC m=+153.865064229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.377400 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-b55lq" podStartSLOduration=125.377380314 podStartE2EDuration="2m5.377380314s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.37459131 +0000 UTC m=+153.378840272" watchObservedRunningTime="2026-02-02 10:59:36.377380314 +0000 UTC m=+153.381629276" Feb 02 10:59:36 crc kubenswrapper[4925]: 
I0202 10:59:36.402339 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-nzwbr" podStartSLOduration=125.402316261 podStartE2EDuration="2m5.402316261s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.399376864 +0000 UTC m=+153.403625826" watchObservedRunningTime="2026-02-02 10:59:36.402316261 +0000 UTC m=+153.406565223" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.455478 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qx9mv" podStartSLOduration=125.455451063 podStartE2EDuration="2m5.455451063s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.45345874 +0000 UTC m=+153.457707712" watchObservedRunningTime="2026-02-02 10:59:36.455451063 +0000 UTC m=+153.459700025" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.460448 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.460732 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:36.960689501 +0000 UTC m=+153.964938463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.500695 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" podStartSLOduration=125.500668816 podStartE2EDuration="2m5.500668816s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.497583364 +0000 UTC m=+153.501832326" watchObservedRunningTime="2026-02-02 10:59:36.500668816 +0000 UTC m=+153.504917778" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.537702 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fsn6h" podStartSLOduration=125.537683862 podStartE2EDuration="2m5.537683862s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.53645528 +0000 UTC m=+153.540704252" watchObservedRunningTime="2026-02-02 10:59:36.537683862 +0000 UTC m=+153.541932824" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.562863 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: 
\"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.563196 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.063181285 +0000 UTC m=+154.067430247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.579895 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-l76bs" podStartSLOduration=6.579874635 podStartE2EDuration="6.579874635s" podCreationTimestamp="2026-02-02 10:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.576067965 +0000 UTC m=+153.580316927" watchObservedRunningTime="2026-02-02 10:59:36.579874635 +0000 UTC m=+153.584123597" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.618026 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6bxgj" podStartSLOduration=125.615727661 podStartE2EDuration="2m5.615727661s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 10:59:36.613578974 +0000 UTC m=+153.617827956" watchObservedRunningTime="2026-02-02 10:59:36.615727661 +0000 UTC m=+153.622250883" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.657365 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nnqww" podStartSLOduration=6.657345999 podStartE2EDuration="6.657345999s" podCreationTimestamp="2026-02-02 10:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.654178565 +0000 UTC m=+153.658427537" watchObservedRunningTime="2026-02-02 10:59:36.657345999 +0000 UTC m=+153.661594961" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.664526 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.664746 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.164711213 +0000 UTC m=+154.168960185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.664904 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.665365 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.16534549 +0000 UTC m=+154.169594452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.695706 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" podStartSLOduration=125.69568732 podStartE2EDuration="2m5.69568732s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.695133225 +0000 UTC m=+153.699382197" watchObservedRunningTime="2026-02-02 10:59:36.69568732 +0000 UTC m=+153.699936282" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.736162 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w4klh" podStartSLOduration=125.736142737 podStartE2EDuration="2m5.736142737s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.73586698 +0000 UTC m=+153.740115942" watchObservedRunningTime="2026-02-02 10:59:36.736142737 +0000 UTC m=+153.740391699" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.766495 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.766690 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.266667572 +0000 UTC m=+154.270916534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.766944 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.767266 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.267253408 +0000 UTC m=+154.271502360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.777153 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" podStartSLOduration=125.777137878 podStartE2EDuration="2m5.777137878s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.776644215 +0000 UTC m=+153.780893177" watchObservedRunningTime="2026-02-02 10:59:36.777137878 +0000 UTC m=+153.781386840" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.818775 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" podStartSLOduration=125.818758476 podStartE2EDuration="2m5.818758476s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.818183211 +0000 UTC m=+153.822432173" watchObservedRunningTime="2026-02-02 10:59:36.818758476 +0000 UTC m=+153.823007438" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.851029 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldjvb" podStartSLOduration=125.851013327 podStartE2EDuration="2m5.851013327s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.850301508 +0000 UTC m=+153.854550470" watchObservedRunningTime="2026-02-02 10:59:36.851013327 +0000 UTC m=+153.855262289" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.866290 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:36 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:36 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:36 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.866349 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.867652 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.867823 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.36779525 +0000 UTC m=+154.372044212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.867957 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.868291 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.368278592 +0000 UTC m=+154.372527554 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.891272 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cmf26" podStartSLOduration=125.891254909 podStartE2EDuration="2m5.891254909s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.890138209 +0000 UTC m=+153.894387171" watchObservedRunningTime="2026-02-02 10:59:36.891254909 +0000 UTC m=+153.895503871" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.938878 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-d7hhc" podStartSLOduration=125.938861784 podStartE2EDuration="2m5.938861784s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.937404446 +0000 UTC m=+153.941653478" watchObservedRunningTime="2026-02-02 10:59:36.938861784 +0000 UTC m=+153.943110746" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.970271 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.970454 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.470421567 +0000 UTC m=+154.474670529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.970630 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:36 crc kubenswrapper[4925]: E0202 10:59:36.970968 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.470953111 +0000 UTC m=+154.475202143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.981433 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 10:59:36 crc kubenswrapper[4925]: I0202 10:59:36.981782 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" podStartSLOduration=125.981762626 podStartE2EDuration="2m5.981762626s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:36.980434611 +0000 UTC m=+153.984683573" watchObservedRunningTime="2026-02-02 10:59:36.981762626 +0000 UTC m=+153.986011588" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.015053 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" podStartSLOduration=126.015036894 podStartE2EDuration="2m6.015036894s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.013311878 +0000 UTC m=+154.017560840" watchObservedRunningTime="2026-02-02 10:59:37.015036894 +0000 UTC m=+154.019285856" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.071506 4925 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.071911 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.571892873 +0000 UTC m=+154.576141835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.173132 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.173549 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.673529674 +0000 UTC m=+154.677778716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.206189 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.274672 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.275128 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.775096504 +0000 UTC m=+154.779345466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.331789 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" event={"ID":"404fe0cb-7979-4e18-8b34-24c961c0584b","Type":"ContainerStarted","Data":"9f28be0927d7172f5ff3ee5b123da54ae562fa756ccbf5a6401e97357d3033ab"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.333209 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" event={"ID":"caf57c65-243f-462a-ac93-83d3740c4287","Type":"ContainerStarted","Data":"cba8d44814ce39e46b09eb77ab3ff627d81f6a6f75efac027ca5a748db91aede"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.333248 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" event={"ID":"caf57c65-243f-462a-ac93-83d3740c4287","Type":"ContainerStarted","Data":"2ebacd68a51677548cea84912875ca9f81eedf680b06958865e1a564d5bb0cc3"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.334433 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" event={"ID":"34adde6a-61e7-4b44-9a72-72972f734a3c","Type":"ContainerStarted","Data":"afdddca08b13acc0a0113e5084f020d430f5f89e0161e917816587ca70b07921"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.340145 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" event={"ID":"0542f8d3-1555-4a7c-9c54-e3c075841559","Type":"ContainerStarted","Data":"4e0456c7b9e95a00bd49c3a295f8cfed6a68ae5e12616a0654236a4835f7c623"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.344920 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" event={"ID":"ad7721d8-a99d-4aa4-a3d3-d6c42f813514","Type":"ContainerStarted","Data":"1de5d0b18fc190d3e2ea3e00428ba3a099c168d5b92b4dbec49c4db18ad89ac3"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.347020 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" event={"ID":"76d76de9-7fea-489e-9a4a-9498ac01041a","Type":"ContainerStarted","Data":"88199dedf45df31f1bb1a4e47faadb53ebcad4fefba4c02a2e197ea5dfbb8256"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.347457 4925 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v6b8t container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.347492 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" podUID="76d76de9-7fea-489e-9a4a-9498ac01041a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.349995 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl" 
event={"ID":"aaa7c8d8-41fd-4212-b022-6939cc27d1d3","Type":"ContainerStarted","Data":"0b963560b9a615bc224da99dbf1d12a92f4ef2fcc99b3f1afb1d0cdeb3b623cf"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.353171 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" event={"ID":"628e72ad-1a83-4e42-a5bd-3ab0c710993e","Type":"ContainerStarted","Data":"e24cefff7e91927a1fb6356438a7c2f93168551edc341fe8d6b075387b359dc6"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.355741 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sch2v" podStartSLOduration=126.35572759 podStartE2EDuration="2m6.35572759s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.355206547 +0000 UTC m=+154.359455509" watchObservedRunningTime="2026-02-02 10:59:37.35572759 +0000 UTC m=+154.359976552" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.361572 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" event={"ID":"3d699c1d-88f3-4375-9400-644dfd53edae","Type":"ContainerStarted","Data":"37207716ec2cebe3a38c21bf71fe8239d5d95a64250d9558f18b4120ce34a447"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.361802 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.367650 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" event={"ID":"c389a4ac-ad09-48a6-8f22-716afcca72b1","Type":"ContainerStarted","Data":"d5fedf5d07aedb9daa6fdd9fef3403ba1bf3a327859547b7e6b7398bc8f5a8f7"} Feb 02 10:59:37 crc 
kubenswrapper[4925]: I0202 10:59:37.367854 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" event={"ID":"c389a4ac-ad09-48a6-8f22-716afcca72b1","Type":"ContainerStarted","Data":"8057ba2ebd6441e7a688c1116ce3751a8757e0ec404b859657daa7b635bfffa1"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.377907 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.378278 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.878265275 +0000 UTC m=+154.882514237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.381576 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" event={"ID":"693d8818-a349-4e21-80cd-26caca3271b5","Type":"ContainerStarted","Data":"b6a2d18168617c6b60630bd97716d33c059d6f22d1b0d674a2d4af6189e594d4"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.388696 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s4vpj" podStartSLOduration=126.38867641 podStartE2EDuration="2m6.38867641s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.382806165 +0000 UTC m=+154.387055127" watchObservedRunningTime="2026-02-02 10:59:37.38867641 +0000 UTC m=+154.392925372" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.392945 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" event={"ID":"5a6f28ef-1382-427a-a202-8f8559f74f94","Type":"ContainerStarted","Data":"e4ca142d77654eb9ecc5f1e7bf79ac4c1badc9e154c551492eb7ee33ac233f60"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.400847 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" 
event={"ID":"0708251f-5f32-4341-9813-9f3e4f19b5e1","Type":"ContainerStarted","Data":"a9e6f5329bdbceb7f9f2346c05eab5bcac55a8f1b668abe3ab4f109058f7b8ce"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.401664 4925 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sz8zf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.401718 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" podUID="0708251f-5f32-4341-9813-9f3e4f19b5e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.417431 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" event={"ID":"a6100093-1950-472d-a74f-f0aac942416b","Type":"ContainerStarted","Data":"7bc0b71bf03aee83be76b3e705a38b1b0db009712d4a59fffb7c505884f8e1b8"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.422359 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.422626 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.423400 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" podStartSLOduration=126.423383815 podStartE2EDuration="2m6.423383815s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.421867535 +0000 UTC m=+154.426116497" watchObservedRunningTime="2026-02-02 10:59:37.423383815 +0000 UTC m=+154.427632777" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.430636 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nnqww" event={"ID":"5cda6996-5995-4c69-888f-3a8838e992d9","Type":"ContainerStarted","Data":"55295e61bdff2535ed99a42aa3b2d33464acc09ee7c635587897d8c0e3296e24"} Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.435209 4925 patch_prober.go:28] interesting pod/apiserver-76f77b778f-68qpw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.438748 4925 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9cfdj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.438789 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" podUID="3a169060-1fa6-45cc-9c6a-61f3f74ddd0b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.438791 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" podUID="628e72ad-1a83-4e42-a5bd-3ab0c710993e" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection 
refused" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.438947 4925 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kn82k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.438980 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.478497 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.479029 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbbdl" podStartSLOduration=126.479002712 podStartE2EDuration="2m6.479002712s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.465546187 +0000 UTC m=+154.469795149" watchObservedRunningTime="2026-02-02 10:59:37.479002712 +0000 UTC m=+154.483251674" Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.480407 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:37.980375728 +0000 UTC m=+154.984624690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.497788 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" podStartSLOduration=126.497766837 podStartE2EDuration="2m6.497766837s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.487381063 +0000 UTC m=+154.491630025" watchObservedRunningTime="2026-02-02 10:59:37.497766837 +0000 UTC m=+154.502015799" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.510643 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twjvl" podStartSLOduration=126.510611746 podStartE2EDuration="2m6.510611746s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.506447166 +0000 UTC m=+154.510696138" watchObservedRunningTime="2026-02-02 10:59:37.510611746 +0000 UTC m=+154.514860708" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.558603 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m5lvb" podStartSLOduration=126.558577131 podStartE2EDuration="2m6.558577131s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.55700876 +0000 UTC m=+154.561257722" watchObservedRunningTime="2026-02-02 10:59:37.558577131 +0000 UTC m=+154.562826093" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.559492 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbmjc" podStartSLOduration=126.559486845 podStartE2EDuration="2m6.559486845s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.534419364 +0000 UTC m=+154.538668326" watchObservedRunningTime="2026-02-02 10:59:37.559486845 +0000 UTC m=+154.563735807" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.581759 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.582963 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ghpq7" podStartSLOduration=126.582908463 podStartE2EDuration="2m6.582908463s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.578649911 
+0000 UTC m=+154.582898883" watchObservedRunningTime="2026-02-02 10:59:37.582908463 +0000 UTC m=+154.587157425" Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.583667 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.083623742 +0000 UTC m=+155.087872944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.597445 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" podStartSLOduration=126.597420026 podStartE2EDuration="2m6.597420026s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.596566073 +0000 UTC m=+154.600815035" watchObservedRunningTime="2026-02-02 10:59:37.597420026 +0000 UTC m=+154.601668988" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.621520 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t6w8r" podStartSLOduration=126.621502151 podStartE2EDuration="2m6.621502151s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 
10:59:37.621223094 +0000 UTC m=+154.625472056" watchObservedRunningTime="2026-02-02 10:59:37.621502151 +0000 UTC m=+154.625751113" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.683153 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.683476 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.183461295 +0000 UTC m=+155.187710257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.785134 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.785797 4925 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.285783234 +0000 UTC m=+155.290032196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.866577 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:37 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:37 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:37 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.866626 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.886770 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.887130 4925 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.387112387 +0000 UTC m=+155.391361349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:37 crc kubenswrapper[4925]: I0202 10:59:37.988030 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:37 crc kubenswrapper[4925]: E0202 10:59:37.988339 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.488326327 +0000 UTC m=+155.492575289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.089364 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.089715 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.58967089 +0000 UTC m=+155.593919852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.089821 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.090183 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.590167173 +0000 UTC m=+155.594416135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.191222 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.191636 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.69162051 +0000 UTC m=+155.695869472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.292919 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.293321 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.793310422 +0000 UTC m=+155.797559384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.394361 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.394626 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.894595224 +0000 UTC m=+155.898844196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.395002 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.395367 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.895353204 +0000 UTC m=+155.899602166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.431317 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jhw58" event={"ID":"f6f710ee-2823-4865-890e-1506e7eca156","Type":"ContainerStarted","Data":"89b36d8491ba2d0d4ccdaf2a0017c0efbe565ed55f34d9d37b6919515ae90056"} Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.433528 4925 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v6b8t container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.433613 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" podUID="76d76de9-7fea-489e-9a4a-9498ac01041a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.435021 4925 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sz8zf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.435057 4925 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" podUID="0708251f-5f32-4341-9813-9f3e4f19b5e1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.437005 4925 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hlntp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.437012 4925 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hlntp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.437054 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" podUID="e9fea0fb-a1ca-4ab8-a1fc-92673a76105e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.437098 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" podUID="e9fea0fb-a1ca-4ab8-a1fc-92673a76105e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.495739 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.496638 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:38.996606794 +0000 UTC m=+156.000855756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.597671 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.598100 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.098068321 +0000 UTC m=+156.102317283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.699024 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.699362 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.199346341 +0000 UTC m=+156.203595313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.800892 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.801271 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.30125942 +0000 UTC m=+156.305508382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.865915 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:38 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:38 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:38 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.865979 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.902446 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.902631 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:39.402599073 +0000 UTC m=+156.406848035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:38 crc kubenswrapper[4925]: I0202 10:59:38.902792 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:38 crc kubenswrapper[4925]: E0202 10:59:38.903182 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.403169558 +0000 UTC m=+156.407418520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.004354 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.004586 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.504558622 +0000 UTC m=+156.508807584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.004786 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.005148 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.505135757 +0000 UTC m=+156.509384719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.105591 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.105773 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.605746081 +0000 UTC m=+156.609995043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.105883 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.106274 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.606263035 +0000 UTC m=+156.610512047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.207319 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.207458 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.707439764 +0000 UTC m=+156.711688726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.207876 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.208186 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.708176353 +0000 UTC m=+156.712425315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.309254 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.309428 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.809398393 +0000 UTC m=+156.813647345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.309915 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.310281 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.810272816 +0000 UTC m=+156.814521868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.410740 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.411145 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:39.911119777 +0000 UTC m=+156.915368739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.512077 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.512681 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.012666225 +0000 UTC m=+157.016915177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.536090 4925 csr.go:261] certificate signing request csr-z4896 is approved, waiting to be issued Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.543549 4925 csr.go:257] certificate signing request csr-z4896 is issued Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.612676 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.613039 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.113008942 +0000 UTC m=+157.117257904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.716877 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.716922 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.716941 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.716992 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.717031 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.717303 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.217291593 +0000 UTC m=+157.221540555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.722464 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.722851 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.723224 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.725674 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.782165 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.795325 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.805530 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.817729 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.817889 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.317858616 +0000 UTC m=+157.322107588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.818326 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.818695 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.318685677 +0000 UTC m=+157.322934709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.868601 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:39 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:39 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:39 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.868659 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:39 crc kubenswrapper[4925]: I0202 10:59:39.919910 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:39 crc kubenswrapper[4925]: E0202 10:59:39.920596 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:40.420564075 +0000 UTC m=+157.424813057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.030851 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.031171 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.531158822 +0000 UTC m=+157.535407784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.133687 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.134122 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.634105027 +0000 UTC m=+157.638353979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.235507 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.236267 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.736251262 +0000 UTC m=+157.740500224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.321359 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gd5mx" podStartSLOduration=129.321344586 podStartE2EDuration="2m9.321344586s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:37.644126688 +0000 UTC m=+154.648375650" watchObservedRunningTime="2026-02-02 10:59:40.321344586 +0000 UTC m=+157.325593548" Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.336427 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.336798 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.836782764 +0000 UTC m=+157.841031726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.437807 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.438244 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:40.93823177 +0000 UTC m=+157.942480732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.467820 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d708ed7515c7d77883ad0d25d1f39b283c2553564fc16382e954eca55de07116"} Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.538438 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.538827 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.038811593 +0000 UTC m=+158.043060555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.544468 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 10:54:39 +0000 UTC, rotation deadline is 2026-12-15 09:23:38.767337517 +0000 UTC Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.544507 4925 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7582h23m58.222832574s for next certificate rotation Feb 02 10:59:40 crc kubenswrapper[4925]: W0202 10:59:40.549363 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-9bae89a3af238c1c28c85edb3d3c76815a2629c3f32158853cea714c60dbc55b WatchSource:0}: Error finding container 9bae89a3af238c1c28c85edb3d3c76815a2629c3f32158853cea714c60dbc55b: Status 404 returned error can't find the container with id 9bae89a3af238c1c28c85edb3d3c76815a2629c3f32158853cea714c60dbc55b Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.639733 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.640404 4925 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.140390602 +0000 UTC m=+158.144639564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.740988 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.741194 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.24116591 +0000 UTC m=+158.245414872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.741270 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.741581 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.241574591 +0000 UTC m=+158.245823553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.842884 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.843003 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.342985566 +0000 UTC m=+158.347234528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.843140 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.843425 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.343417878 +0000 UTC m=+158.347666830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.866892 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:40 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:40 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:40 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.866952 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:40 crc kubenswrapper[4925]: I0202 10:59:40.944343 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:40 crc kubenswrapper[4925]: E0202 10:59:40.944741 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:41.44472512 +0000 UTC m=+158.448974082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.045980 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.046454 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.546429323 +0000 UTC m=+158.550678285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.148076 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.148221 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.648193547 +0000 UTC m=+158.652442509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.148409 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.148748 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.648736001 +0000 UTC m=+158.652985033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.219643 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.220304 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.228592 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.229099 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.249956 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.250169 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:41.750127796 +0000 UTC m=+158.754376748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.250322 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.250622 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.750614479 +0000 UTC m=+158.754863441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.279389 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.351257 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.351459 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.851433178 +0000 UTC m=+158.855682130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.351538 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16df83ed-b915-4a95-8753-e25975e1d2ea-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"16df83ed-b915-4a95-8753-e25975e1d2ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.351600 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16df83ed-b915-4a95-8753-e25975e1d2ea-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"16df83ed-b915-4a95-8753-e25975e1d2ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.351761 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.352496 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:41.852329521 +0000 UTC m=+158.856578483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.401344 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xf5qm"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.402781 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.404869 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.421135 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xf5qm"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.444625 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hlntp" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.453581 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.453686 4925 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.953670725 +0000 UTC m=+158.957919687 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.453914 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.454209 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:41.954202039 +0000 UTC m=+158.958451001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.454292 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16df83ed-b915-4a95-8753-e25975e1d2ea-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"16df83ed-b915-4a95-8753-e25975e1d2ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.454343 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16df83ed-b915-4a95-8753-e25975e1d2ea-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"16df83ed-b915-4a95-8753-e25975e1d2ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.454563 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16df83ed-b915-4a95-8753-e25975e1d2ea-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"16df83ed-b915-4a95-8753-e25975e1d2ea\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.475745 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16df83ed-b915-4a95-8753-e25975e1d2ea-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"16df83ed-b915-4a95-8753-e25975e1d2ea\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.489083 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jhw58" event={"ID":"f6f710ee-2823-4865-890e-1506e7eca156","Type":"ContainerStarted","Data":"b58a3e520cf6b0ed19611040de668bb6d1c9fcac889e736a5c4061ce1c61fa36"} Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.491812 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c563e5c497e5505513392000f2cbc591a071d1541949fe50bf8596d76621dc41"} Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.492044 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.506498 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"74d1898851cc15623767e22dc8fecb3a1bff404b140d776b6fed8e7d936fb2c6"} Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.506552 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b17a7a24e4c3803bb55128ed53fda09104e96b20f5afb7931a90bee8cad7b33a"} Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.515335 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3e9204f7bf3eed7da822807eef83039c25a2bf2f5f741245bf2735dc660438be"} Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 
10:59:41.515393 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9bae89a3af238c1c28c85edb3d3c76815a2629c3f32158853cea714c60dbc55b"} Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.534050 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.558263 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.558656 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-utilities\") pod \"community-operators-xf5qm\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.558768 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdjsx\" (UniqueName: \"kubernetes.io/projected/51968f99-bd7d-4958-bb6f-ba8035b2e637-kube-api-access-pdjsx\") pod \"community-operators-xf5qm\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.558918 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-catalog-content\") pod \"community-operators-xf5qm\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.559766 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:42.059741363 +0000 UTC m=+159.063990325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.607151 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gcc4h"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.612669 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gcc4h"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.612832 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.616202 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.660191 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-utilities\") pod \"community-operators-xf5qm\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.660226 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdjsx\" (UniqueName: \"kubernetes.io/projected/51968f99-bd7d-4958-bb6f-ba8035b2e637-kube-api-access-pdjsx\") pod \"community-operators-xf5qm\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.660255 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.660317 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-catalog-content\") pod \"community-operators-xf5qm\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.660706 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-catalog-content\") pod \"community-operators-xf5qm\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.660991 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:42.160972013 +0000 UTC m=+159.165220975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.661659 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-utilities\") pod \"community-operators-xf5qm\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.679554 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdjsx\" (UniqueName: \"kubernetes.io/projected/51968f99-bd7d-4958-bb6f-ba8035b2e637-kube-api-access-pdjsx\") pod \"community-operators-xf5qm\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.718357 4925 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xf5qm" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.762766 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.762975 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:42.262944933 +0000 UTC m=+159.267193895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.763035 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wclhj\" (UniqueName: \"kubernetes.io/projected/0f7aa95c-3861-48ab-a30f-0301aad169d7-kube-api-access-wclhj\") pod \"certified-operators-gcc4h\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.763119 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-catalog-content\") pod \"certified-operators-gcc4h\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.763174 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.763202 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-utilities\") pod \"certified-operators-gcc4h\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.763562 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:42.263550179 +0000 UTC m=+159.267799141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.799697 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pn6rl"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.801282 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.811226 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn6rl"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.825675 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.864155 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.864437 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-catalog-content\") pod \"certified-operators-gcc4h\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc 
kubenswrapper[4925]: I0202 10:59:41.864490 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-utilities\") pod \"community-operators-pn6rl\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.864511 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-catalog-content\") pod \"community-operators-pn6rl\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.864530 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-utilities\") pod \"certified-operators-gcc4h\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.864564 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6tcs\" (UniqueName: \"kubernetes.io/projected/46c2cde5-148b-44c2-821a-a470122f1167-kube-api-access-d6tcs\") pod \"community-operators-pn6rl\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.864586 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wclhj\" (UniqueName: \"kubernetes.io/projected/0f7aa95c-3861-48ab-a30f-0301aad169d7-kube-api-access-wclhj\") pod \"certified-operators-gcc4h\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " 
pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.865055 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:42.365025185 +0000 UTC m=+159.369274147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.865369 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-catalog-content\") pod \"certified-operators-gcc4h\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.865431 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-utilities\") pod \"certified-operators-gcc4h\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.873575 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:41 crc 
kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:41 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:41 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.873637 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.884941 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wclhj\" (UniqueName: \"kubernetes.io/projected/0f7aa95c-3861-48ab-a30f-0301aad169d7-kube-api-access-wclhj\") pod \"certified-operators-gcc4h\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.944771 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.965914 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.965945 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-utilities\") pod \"community-operators-pn6rl\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.965969 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-catalog-content\") pod \"community-operators-pn6rl\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.966004 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6tcs\" (UniqueName: \"kubernetes.io/projected/46c2cde5-148b-44c2-821a-a470122f1167-kube-api-access-d6tcs\") pod \"community-operators-pn6rl\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: E0202 10:59:41.966691 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:59:42.466674106 +0000 UTC m=+159.470923068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.967758 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-catalog-content\") pod \"community-operators-pn6rl\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.969207 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-utilities\") pod \"community-operators-pn6rl\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.985900 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6tcs\" (UniqueName: \"kubernetes.io/projected/46c2cde5-148b-44c2-821a-a470122f1167-kube-api-access-d6tcs\") pod \"community-operators-pn6rl\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.988586 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xf5qm"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.996734 4925 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qncpv"] Feb 02 10:59:41 crc kubenswrapper[4925]: I0202 10:59:41.997736 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.007357 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qncpv"] Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.038090 4925 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 10:59:42 crc kubenswrapper[4925]: W0202 10:59:42.045092 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51968f99_bd7d_4958_bb6f_ba8035b2e637.slice/crio-949bb788edd4655f009729a5f3fb060b1bcc9d6241b312694343fc2857aff924 WatchSource:0}: Error finding container 949bb788edd4655f009729a5f3fb060b1bcc9d6241b312694343fc2857aff924: Status 404 returned error can't find the container with id 949bb788edd4655f009729a5f3fb060b1bcc9d6241b312694343fc2857aff924 Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.067267 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.067437 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-catalog-content\") pod \"certified-operators-qncpv\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " 
pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.067502 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56pv6\" (UniqueName: \"kubernetes.io/projected/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-kube-api-access-56pv6\") pod \"certified-operators-qncpv\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.067598 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-utilities\") pod \"certified-operators-qncpv\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: E0202 10:59:42.067718 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:42.567701132 +0000 UTC m=+159.571950094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.156721 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pn6rl" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.179984 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56pv6\" (UniqueName: \"kubernetes.io/projected/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-kube-api-access-56pv6\") pod \"certified-operators-qncpv\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.180223 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-utilities\") pod \"certified-operators-qncpv\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.180332 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.180399 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-catalog-content\") pod \"certified-operators-qncpv\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.181276 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-utilities\") pod \"certified-operators-qncpv\" 
(UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: E0202 10:59:42.181550 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:59:42.681537754 +0000 UTC m=+159.685786716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-56md8" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.182330 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-catalog-content\") pod \"certified-operators-qncpv\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.204116 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56pv6\" (UniqueName: \"kubernetes.io/projected/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-kube-api-access-56pv6\") pod \"certified-operators-qncpv\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.256246 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.257249 4925 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.271576 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.286980 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:42 crc kubenswrapper[4925]: E0202 10:59:42.287484 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:59:42.787463558 +0000 UTC m=+159.791712520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.309233 4925 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T10:59:42.038116081Z","Handler":null,"Name":""} Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.336031 4925 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.336097 4925 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.340533 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qncpv" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.364530 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.390539 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.435323 4925 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.435378 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.461883 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gcc4h"] Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.471525 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn6rl"] Feb 02 10:59:42 crc kubenswrapper[4925]: W0202 10:59:42.472524 4925 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f7aa95c_3861_48ab_a30f_0301aad169d7.slice/crio-a3c11c398fabe8655d99143af21a81855162257d3a530761add9d99c2d0c3d7c WatchSource:0}: Error finding container a3c11c398fabe8655d99143af21a81855162257d3a530761add9d99c2d0c3d7c: Status 404 returned error can't find the container with id a3c11c398fabe8655d99143af21a81855162257d3a530761add9d99c2d0c3d7c Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.492533 4925 patch_prober.go:28] interesting pod/apiserver-76f77b778f-68qpw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]log ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]etcd ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]poststarthook/max-in-flight-filter ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 02 10:59:42 crc kubenswrapper[4925]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 02 10:59:42 crc kubenswrapper[4925]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 02 10:59:42 crc kubenswrapper[4925]: [+]poststarthook/project.openshift.io-projectcache ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]poststarthook/openshift.io-startinformers ok Feb 02 10:59:42 crc kubenswrapper[4925]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 02 10:59:42 crc kubenswrapper[4925]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 10:59:42 crc kubenswrapper[4925]: livez check failed Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.492580 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" podUID="628e72ad-1a83-4e42-a5bd-3ab0c710993e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.507176 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-56md8\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.526259 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcc4h" event={"ID":"0f7aa95c-3861-48ab-a30f-0301aad169d7","Type":"ContainerStarted","Data":"a3c11c398fabe8655d99143af21a81855162257d3a530761add9d99c2d0c3d7c"} Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.528640 4925 generic.go:334] "Generic (PLEG): container finished" podID="3a87278a-c899-40df-99ef-324a5415be60" containerID="1c2e4a77dbc7f61e608f64e191cf8dab8fe2d8a4a8eed742bef5af1f476b63eb" exitCode=0 Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.528710 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" event={"ID":"3a87278a-c899-40df-99ef-324a5415be60","Type":"ContainerDied","Data":"1c2e4a77dbc7f61e608f64e191cf8dab8fe2d8a4a8eed742bef5af1f476b63eb"} Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.533986 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jhw58" 
event={"ID":"f6f710ee-2823-4865-890e-1506e7eca156","Type":"ContainerStarted","Data":"f8653a33a1280d2d729bf40c2f245ea681fc5473a79955bb8ff5c86ce5d1c583"} Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.534033 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jhw58" event={"ID":"f6f710ee-2823-4865-890e-1506e7eca156","Type":"ContainerStarted","Data":"910d70bef31f7c138f794f99639c13d1184f86177a7b1e0d4cef7599d8f5b7de"} Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.544593 4925 generic.go:334] "Generic (PLEG): container finished" podID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerID="11a3196a80762f3d900ab6b764b740d5ce55b40b9e3f51e9b99f444da7f31e72" exitCode=0 Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.544681 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf5qm" event={"ID":"51968f99-bd7d-4958-bb6f-ba8035b2e637","Type":"ContainerDied","Data":"11a3196a80762f3d900ab6b764b740d5ce55b40b9e3f51e9b99f444da7f31e72"} Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.544715 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf5qm" event={"ID":"51968f99-bd7d-4958-bb6f-ba8035b2e637","Type":"ContainerStarted","Data":"949bb788edd4655f009729a5f3fb060b1bcc9d6241b312694343fc2857aff924"} Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.562708 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"16df83ed-b915-4a95-8753-e25975e1d2ea","Type":"ContainerStarted","Data":"188f841a5f45ec2558d15ffa8e34b064882411b24f40c0b5098c6c701b394ab5"} Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.563029 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.564445 4925 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.576755 4925 patch_prober.go:28] interesting pod/console-f9d7485db-nzwbr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.576796 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nzwbr" podUID="81af45ef-2049-4155-9c0b-ae722e6b8c8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.581396 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c7w8n" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.588415 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-b55lq" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.592931 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.639931 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.640007 4925 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.646236 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.646276 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.657171 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.660529 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qncpv"] Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.677610 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.775136 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.878859 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:42 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:42 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:42 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:42 crc kubenswrapper[4925]: I0202 10:59:42.879298 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.000757 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-56md8"] Feb 02 10:59:43 crc kubenswrapper[4925]: W0202 10:59:43.012255 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod421043e2_e94a_4b1b_8571_ea62b753b06d.slice/crio-275d46f12e397b980eb8e70892f4952d509d7a109a3c4133a784aa560b45e8c3 
WatchSource:0}: Error finding container 275d46f12e397b980eb8e70892f4952d509d7a109a3c4133a784aa560b45e8c3: Status 404 returned error can't find the container with id 275d46f12e397b980eb8e70892f4952d509d7a109a3c4133a784aa560b45e8c3 Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.399044 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.399362 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.570356 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" event={"ID":"421043e2-e94a-4b1b-8571-ea62b753b06d","Type":"ContainerStarted","Data":"275d46f12e397b980eb8e70892f4952d509d7a109a3c4133a784aa560b45e8c3"} Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.572172 4925 generic.go:334] "Generic (PLEG): container finished" podID="46c2cde5-148b-44c2-821a-a470122f1167" containerID="be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395" exitCode=0 Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.572229 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn6rl" event={"ID":"46c2cde5-148b-44c2-821a-a470122f1167","Type":"ContainerDied","Data":"be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395"} Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.572247 4925 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-pn6rl" event={"ID":"46c2cde5-148b-44c2-821a-a470122f1167","Type":"ContainerStarted","Data":"1619d56e1c64057d9623b12a4ff681292d93abcae4ed6c7a528d8f31df72f9aa"} Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.574026 4925 generic.go:334] "Generic (PLEG): container finished" podID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerID="46e1632faa8a0f835647fa9d3cb39091d0e0c50a6e890197868aa5865956ce8d" exitCode=0 Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.574099 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qncpv" event={"ID":"bf7432d2-d4a9-4fa9-8570-e76d21c8e771","Type":"ContainerDied","Data":"46e1632faa8a0f835647fa9d3cb39091d0e0c50a6e890197868aa5865956ce8d"} Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.574140 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qncpv" event={"ID":"bf7432d2-d4a9-4fa9-8570-e76d21c8e771","Type":"ContainerStarted","Data":"cccc4058476dd1dc21596f9f6014bbf7f6fd4e2b501f6866f77252f4351ffea0"} Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.575686 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"16df83ed-b915-4a95-8753-e25975e1d2ea","Type":"ContainerStarted","Data":"ed1518413ece8369fd67fe16ce72a16dd762bbf670c936da41dc0bfa01c27a80"} Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.577064 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.578649 4925 generic.go:334] "Generic (PLEG): container finished" podID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerID="41fb91a2f7cbf37ce3700bf89e695299d0b0769ddc8bcf08b960a10f259eeb5e" exitCode=0 Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.579907 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-gcc4h" event={"ID":"0f7aa95c-3861-48ab-a30f-0301aad169d7","Type":"ContainerDied","Data":"41fb91a2f7cbf37ce3700bf89e695299d0b0769ddc8bcf08b960a10f259eeb5e"} Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.602989 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d5f7d"] Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.606132 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.607463 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5f7d"] Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.609093 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.619770 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jhw58" podStartSLOduration=13.619748121 podStartE2EDuration="13.619748121s" podCreationTimestamp="2026-02-02 10:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:43.618651842 +0000 UTC m=+160.622900824" watchObservedRunningTime="2026-02-02 10:59:43.619748121 +0000 UTC m=+160.623997083" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.626190 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.651610 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.651580171 podStartE2EDuration="2.651580171s" 
podCreationTimestamp="2026-02-02 10:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:43.650122092 +0000 UTC m=+160.654371074" watchObservedRunningTime="2026-02-02 10:59:43.651580171 +0000 UTC m=+160.655829153" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.721117 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-utilities\") pod \"redhat-marketplace-d5f7d\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.721399 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsdkw\" (UniqueName: \"kubernetes.io/projected/c985f150-ec7d-4175-99a1-8fb775b7d7d9-kube-api-access-lsdkw\") pod \"redhat-marketplace-d5f7d\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.721471 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-catalog-content\") pod \"redhat-marketplace-d5f7d\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.826661 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-utilities\") pod \"redhat-marketplace-d5f7d\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 
10:59:43.826761 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsdkw\" (UniqueName: \"kubernetes.io/projected/c985f150-ec7d-4175-99a1-8fb775b7d7d9-kube-api-access-lsdkw\") pod \"redhat-marketplace-d5f7d\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.826783 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-catalog-content\") pod \"redhat-marketplace-d5f7d\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.827214 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-catalog-content\") pod \"redhat-marketplace-d5f7d\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.827419 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-utilities\") pod \"redhat-marketplace-d5f7d\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.845836 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsdkw\" (UniqueName: \"kubernetes.io/projected/c985f150-ec7d-4175-99a1-8fb775b7d7d9-kube-api-access-lsdkw\") pod \"redhat-marketplace-d5f7d\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.863615 4925 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.871881 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:43 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:43 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:43 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.871925 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.877755 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sz8zf" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.878059 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:43 crc kubenswrapper[4925]: I0202 10:59:43.926362 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.000044 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-26nhd"] Feb 02 10:59:44 crc kubenswrapper[4925]: E0202 10:59:44.000499 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a87278a-c899-40df-99ef-324a5415be60" containerName="collect-profiles" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.000588 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a87278a-c899-40df-99ef-324a5415be60" containerName="collect-profiles" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.000746 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a87278a-c899-40df-99ef-324a5415be60" containerName="collect-profiles" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.001500 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.008780 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26nhd"] Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.028297 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw447\" (UniqueName: \"kubernetes.io/projected/3a87278a-c899-40df-99ef-324a5415be60-kube-api-access-lw447\") pod \"3a87278a-c899-40df-99ef-324a5415be60\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.028405 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a87278a-c899-40df-99ef-324a5415be60-secret-volume\") pod \"3a87278a-c899-40df-99ef-324a5415be60\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.028471 4925 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a87278a-c899-40df-99ef-324a5415be60-config-volume\") pod \"3a87278a-c899-40df-99ef-324a5415be60\" (UID: \"3a87278a-c899-40df-99ef-324a5415be60\") " Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.032561 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a87278a-c899-40df-99ef-324a5415be60-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a87278a-c899-40df-99ef-324a5415be60" (UID: "3a87278a-c899-40df-99ef-324a5415be60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.033515 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a87278a-c899-40df-99ef-324a5415be60-kube-api-access-lw447" (OuterVolumeSpecName: "kube-api-access-lw447") pod "3a87278a-c899-40df-99ef-324a5415be60" (UID: "3a87278a-c899-40df-99ef-324a5415be60"). InnerVolumeSpecName "kube-api-access-lw447". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.034424 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a87278a-c899-40df-99ef-324a5415be60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a87278a-c899-40df-99ef-324a5415be60" (UID: "3a87278a-c899-40df-99ef-324a5415be60"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.130241 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-utilities\") pod \"redhat-marketplace-26nhd\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.130695 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr9gv\" (UniqueName: \"kubernetes.io/projected/560d449d-bbfb-4f5b-a14f-4a26175a20d2-kube-api-access-fr9gv\") pod \"redhat-marketplace-26nhd\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.130735 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-catalog-content\") pod \"redhat-marketplace-26nhd\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.130922 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw447\" (UniqueName: \"kubernetes.io/projected/3a87278a-c899-40df-99ef-324a5415be60-kube-api-access-lw447\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.130956 4925 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a87278a-c899-40df-99ef-324a5415be60-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.130973 4925 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/3a87278a-c899-40df-99ef-324a5415be60-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.160670 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9cfdj" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.167989 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5f7d"] Feb 02 10:59:44 crc kubenswrapper[4925]: W0202 10:59:44.184296 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc985f150_ec7d_4175_99a1_8fb775b7d7d9.slice/crio-97b056fbbbbe7ca947e4512af405aa7efaabf4804e2c10d556e1733cd833b4db WatchSource:0}: Error finding container 97b056fbbbbe7ca947e4512af405aa7efaabf4804e2c10d556e1733cd833b4db: Status 404 returned error can't find the container with id 97b056fbbbbe7ca947e4512af405aa7efaabf4804e2c10d556e1733cd833b4db Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.209984 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v6b8t" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.231593 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr9gv\" (UniqueName: \"kubernetes.io/projected/560d449d-bbfb-4f5b-a14f-4a26175a20d2-kube-api-access-fr9gv\") pod \"redhat-marketplace-26nhd\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.231641 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-catalog-content\") pod \"redhat-marketplace-26nhd\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " 
pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.231717 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-utilities\") pod \"redhat-marketplace-26nhd\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.232283 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-catalog-content\") pod \"redhat-marketplace-26nhd\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.232377 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-utilities\") pod \"redhat-marketplace-26nhd\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.250970 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr9gv\" (UniqueName: \"kubernetes.io/projected/560d449d-bbfb-4f5b-a14f-4a26175a20d2-kube-api-access-fr9gv\") pod \"redhat-marketplace-26nhd\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.317603 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.527148 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26nhd"] Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.530982 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.531961 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.537508 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.540535 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.540549 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.594735 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gmldm"] Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.596136 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.601420 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.607918 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" event={"ID":"3a87278a-c899-40df-99ef-324a5415be60","Type":"ContainerDied","Data":"2830db1531ef7294234f0ff6c218788a895cffa4e891135127fd25cc05a020a0"} Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.608010 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2830db1531ef7294234f0ff6c218788a895cffa4e891135127fd25cc05a020a0" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.607978 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.608891 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gmldm"] Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.612742 4925 generic.go:334] "Generic (PLEG): container finished" podID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerID="a94b4ffd487b418c228342c2c2f7b298d7aded11f5295dec99ca94f10b9a3369" exitCode=0 Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.612810 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5f7d" event={"ID":"c985f150-ec7d-4175-99a1-8fb775b7d7d9","Type":"ContainerDied","Data":"a94b4ffd487b418c228342c2c2f7b298d7aded11f5295dec99ca94f10b9a3369"} Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.612836 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5f7d" 
event={"ID":"c985f150-ec7d-4175-99a1-8fb775b7d7d9","Type":"ContainerStarted","Data":"97b056fbbbbe7ca947e4512af405aa7efaabf4804e2c10d556e1733cd833b4db"} Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.614245 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" event={"ID":"421043e2-e94a-4b1b-8571-ea62b753b06d","Type":"ContainerStarted","Data":"b70bba44e5ebf00449a566da6858fccaab78682a3d12b02f822e6b90a9f107a8"} Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.614347 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.615266 4925 generic.go:334] "Generic (PLEG): container finished" podID="16df83ed-b915-4a95-8753-e25975e1d2ea" containerID="ed1518413ece8369fd67fe16ce72a16dd762bbf670c936da41dc0bfa01c27a80" exitCode=0 Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.615368 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"16df83ed-b915-4a95-8753-e25975e1d2ea","Type":"ContainerDied","Data":"ed1518413ece8369fd67fe16ce72a16dd762bbf670c936da41dc0bfa01c27a80"} Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.616230 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26nhd" event={"ID":"560d449d-bbfb-4f5b-a14f-4a26175a20d2","Type":"ContainerStarted","Data":"059eee2aede03a3a21b0bee97a6cd8627095b4698c08b1c31476783cef7ebe55"} Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.639173 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f07ca5d9-b793-471e-9d88-3f48063053e3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f07ca5d9-b793-471e-9d88-3f48063053e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 
10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.639270 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f07ca5d9-b793-471e-9d88-3f48063053e3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f07ca5d9-b793-471e-9d88-3f48063053e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.656582 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" podStartSLOduration=133.65656089 podStartE2EDuration="2m13.65656089s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:44.65465111 +0000 UTC m=+161.658900072" watchObservedRunningTime="2026-02-02 10:59:44.65656089 +0000 UTC m=+161.660809852" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.740884 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lchfb\" (UniqueName: \"kubernetes.io/projected/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-kube-api-access-lchfb\") pod \"redhat-operators-gmldm\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.740994 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-utilities\") pod \"redhat-operators-gmldm\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.741021 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-catalog-content\") pod \"redhat-operators-gmldm\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.741046 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f07ca5d9-b793-471e-9d88-3f48063053e3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f07ca5d9-b793-471e-9d88-3f48063053e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.741262 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f07ca5d9-b793-471e-9d88-3f48063053e3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f07ca5d9-b793-471e-9d88-3f48063053e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.742002 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f07ca5d9-b793-471e-9d88-3f48063053e3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f07ca5d9-b793-471e-9d88-3f48063053e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.762241 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f07ca5d9-b793-471e-9d88-3f48063053e3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f07ca5d9-b793-471e-9d88-3f48063053e3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.842061 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-catalog-content\") pod \"redhat-operators-gmldm\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.842206 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lchfb\" (UniqueName: \"kubernetes.io/projected/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-kube-api-access-lchfb\") pod \"redhat-operators-gmldm\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.842263 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-utilities\") pod \"redhat-operators-gmldm\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.842737 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-utilities\") pod \"redhat-operators-gmldm\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.842877 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-catalog-content\") pod \"redhat-operators-gmldm\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.857337 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lchfb\" (UniqueName: 
\"kubernetes.io/projected/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-kube-api-access-lchfb\") pod \"redhat-operators-gmldm\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.862159 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.865386 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:44 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:44 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:44 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.865439 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.955306 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.996497 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tp28w"] Feb 02 10:59:44 crc kubenswrapper[4925]: I0202 10:59:44.997736 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.008030 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp28w"] Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.119148 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.147308 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-utilities\") pod \"redhat-operators-tp28w\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.147473 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkxp\" (UniqueName: \"kubernetes.io/projected/c1044ab1-2d86-4f71-995a-5994d6b2262e-kube-api-access-dbkxp\") pod \"redhat-operators-tp28w\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.147549 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-catalog-content\") pod \"redhat-operators-tp28w\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.179099 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gmldm"] Feb 02 10:59:45 crc kubenswrapper[4925]: W0202 10:59:45.183797 4925 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb01fd158_f4e2_4ec8_953b_12dae9c49dd7.slice/crio-4c352cb4fd3c210467067bbe055d853399e9a729a1c47ef7539a4e73ba68620f WatchSource:0}: Error finding container 4c352cb4fd3c210467067bbe055d853399e9a729a1c47ef7539a4e73ba68620f: Status 404 returned error can't find the container with id 4c352cb4fd3c210467067bbe055d853399e9a729a1c47ef7539a4e73ba68620f Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.248733 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-catalog-content\") pod \"redhat-operators-tp28w\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.248805 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-utilities\") pod \"redhat-operators-tp28w\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.248860 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkxp\" (UniqueName: \"kubernetes.io/projected/c1044ab1-2d86-4f71-995a-5994d6b2262e-kube-api-access-dbkxp\") pod \"redhat-operators-tp28w\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.249291 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-utilities\") pod \"redhat-operators-tp28w\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: 
I0202 10:59:45.270347 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkxp\" (UniqueName: \"kubernetes.io/projected/c1044ab1-2d86-4f71-995a-5994d6b2262e-kube-api-access-dbkxp\") pod \"redhat-operators-tp28w\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.365876 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-catalog-content\") pod \"redhat-operators-tp28w\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.618035 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.623463 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f07ca5d9-b793-471e-9d88-3f48063053e3","Type":"ContainerStarted","Data":"2e23fef2f4ffb96e798e9e0663bd6db79ba00ea125c640c5ff5ddaafcc530193"} Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.624713 4925 generic.go:334] "Generic (PLEG): container finished" podID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" containerID="4a7b1b5c703be17e1c14eeaa3d844f6bc932cb3ab75539b12ec1def1b34ca179" exitCode=0 Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.624928 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26nhd" event={"ID":"560d449d-bbfb-4f5b-a14f-4a26175a20d2","Type":"ContainerDied","Data":"4a7b1b5c703be17e1c14eeaa3d844f6bc932cb3ab75539b12ec1def1b34ca179"} Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.629249 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gmldm" 
event={"ID":"b01fd158-f4e2-4ec8-953b-12dae9c49dd7","Type":"ContainerStarted","Data":"4c352cb4fd3c210467067bbe055d853399e9a729a1c47ef7539a4e73ba68620f"} Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.689541 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nnqww" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.866863 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:45 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:45 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:45 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.866954 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.894618 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:45 crc kubenswrapper[4925]: I0202 10:59:45.949673 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp28w"] Feb 02 10:59:45 crc kubenswrapper[4925]: W0202 10:59:45.963283 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1044ab1_2d86_4f71_995a_5994d6b2262e.slice/crio-aaa863cc6846ba8861a7f10a069f2fa9b6848c5a246c4cbcfded55a88858a040 WatchSource:0}: Error finding container aaa863cc6846ba8861a7f10a069f2fa9b6848c5a246c4cbcfded55a88858a040: Status 404 returned error can't find the container with id aaa863cc6846ba8861a7f10a069f2fa9b6848c5a246c4cbcfded55a88858a040 Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.059588 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16df83ed-b915-4a95-8753-e25975e1d2ea-kubelet-dir\") pod \"16df83ed-b915-4a95-8753-e25975e1d2ea\" (UID: \"16df83ed-b915-4a95-8753-e25975e1d2ea\") " Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.059709 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df83ed-b915-4a95-8753-e25975e1d2ea-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "16df83ed-b915-4a95-8753-e25975e1d2ea" (UID: "16df83ed-b915-4a95-8753-e25975e1d2ea"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.059933 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16df83ed-b915-4a95-8753-e25975e1d2ea-kube-api-access\") pod \"16df83ed-b915-4a95-8753-e25975e1d2ea\" (UID: \"16df83ed-b915-4a95-8753-e25975e1d2ea\") " Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.060225 4925 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16df83ed-b915-4a95-8753-e25975e1d2ea-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.065981 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16df83ed-b915-4a95-8753-e25975e1d2ea-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "16df83ed-b915-4a95-8753-e25975e1d2ea" (UID: "16df83ed-b915-4a95-8753-e25975e1d2ea"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.171681 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16df83ed-b915-4a95-8753-e25975e1d2ea-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.635568 4925 generic.go:334] "Generic (PLEG): container finished" podID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerID="487b68362beb854baee62a204f53cff077d9de711edea8ce1adff2a2ebb95c97" exitCode=0 Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.635737 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gmldm" event={"ID":"b01fd158-f4e2-4ec8-953b-12dae9c49dd7","Type":"ContainerDied","Data":"487b68362beb854baee62a204f53cff077d9de711edea8ce1adff2a2ebb95c97"} Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.637053 4925 generic.go:334] "Generic (PLEG): container finished" podID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerID="b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46" exitCode=0 Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.637124 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp28w" event={"ID":"c1044ab1-2d86-4f71-995a-5994d6b2262e","Type":"ContainerDied","Data":"b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46"} Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.637144 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp28w" event={"ID":"c1044ab1-2d86-4f71-995a-5994d6b2262e","Type":"ContainerStarted","Data":"aaa863cc6846ba8861a7f10a069f2fa9b6848c5a246c4cbcfded55a88858a040"} Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.640170 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"16df83ed-b915-4a95-8753-e25975e1d2ea","Type":"ContainerDied","Data":"188f841a5f45ec2558d15ffa8e34b064882411b24f40c0b5098c6c701b394ab5"} Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.640259 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="188f841a5f45ec2558d15ffa8e34b064882411b24f40c0b5098c6c701b394ab5" Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.640271 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.641878 4925 generic.go:334] "Generic (PLEG): container finished" podID="f07ca5d9-b793-471e-9d88-3f48063053e3" containerID="c505230dbcc1c982047aa5a6df141ee077f63870b846ab90c64727957a30e828" exitCode=0 Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.641904 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f07ca5d9-b793-471e-9d88-3f48063053e3","Type":"ContainerDied","Data":"c505230dbcc1c982047aa5a6df141ee077f63870b846ab90c64727957a30e828"} Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.864671 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:46 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:46 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:46 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:46 crc kubenswrapper[4925]: I0202 10:59:46.864747 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:47 crc 
kubenswrapper[4925]: I0202 10:59:47.427403 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:47 crc kubenswrapper[4925]: I0202 10:59:47.431905 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-68qpw" Feb 02 10:59:47 crc kubenswrapper[4925]: I0202 10:59:47.856456 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:59:47 crc kubenswrapper[4925]: I0202 10:59:47.865693 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:47 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:47 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:47 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:47 crc kubenswrapper[4925]: I0202 10:59:47.866112 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:47 crc kubenswrapper[4925]: I0202 10:59:47.996439 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f07ca5d9-b793-471e-9d88-3f48063053e3-kube-api-access\") pod \"f07ca5d9-b793-471e-9d88-3f48063053e3\" (UID: \"f07ca5d9-b793-471e-9d88-3f48063053e3\") " Feb 02 10:59:47 crc kubenswrapper[4925]: I0202 10:59:47.997260 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f07ca5d9-b793-471e-9d88-3f48063053e3-kubelet-dir\") 
pod \"f07ca5d9-b793-471e-9d88-3f48063053e3\" (UID: \"f07ca5d9-b793-471e-9d88-3f48063053e3\") " Feb 02 10:59:47 crc kubenswrapper[4925]: I0202 10:59:47.997369 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f07ca5d9-b793-471e-9d88-3f48063053e3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f07ca5d9-b793-471e-9d88-3f48063053e3" (UID: "f07ca5d9-b793-471e-9d88-3f48063053e3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:59:47 crc kubenswrapper[4925]: I0202 10:59:47.998250 4925 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f07ca5d9-b793-471e-9d88-3f48063053e3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4925]: I0202 10:59:48.003957 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07ca5d9-b793-471e-9d88-3f48063053e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f07ca5d9-b793-471e-9d88-3f48063053e3" (UID: "f07ca5d9-b793-471e-9d88-3f48063053e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4925]: I0202 10:59:48.099598 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f07ca5d9-b793-471e-9d88-3f48063053e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4925]: I0202 10:59:48.653711 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f07ca5d9-b793-471e-9d88-3f48063053e3","Type":"ContainerDied","Data":"2e23fef2f4ffb96e798e9e0663bd6db79ba00ea125c640c5ff5ddaafcc530193"} Feb 02 10:59:48 crc kubenswrapper[4925]: I0202 10:59:48.653767 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e23fef2f4ffb96e798e9e0663bd6db79ba00ea125c640c5ff5ddaafcc530193" Feb 02 10:59:48 crc kubenswrapper[4925]: I0202 10:59:48.653893 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:59:48 crc kubenswrapper[4925]: I0202 10:59:48.864334 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:48 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:48 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:48 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:48 crc kubenswrapper[4925]: I0202 10:59:48.864418 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:49 crc kubenswrapper[4925]: I0202 10:59:49.864226 4925 patch_prober.go:28] interesting 
pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:49 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:49 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:49 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:49 crc kubenswrapper[4925]: I0202 10:59:49.864276 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:50 crc kubenswrapper[4925]: I0202 10:59:50.864339 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:50 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:50 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:50 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:50 crc kubenswrapper[4925]: I0202 10:59:50.864670 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:51 crc kubenswrapper[4925]: I0202 10:59:51.864414 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:51 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:51 crc kubenswrapper[4925]: 
[+]process-running ok Feb 02 10:59:51 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:51 crc kubenswrapper[4925]: I0202 10:59:51.864492 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:52 crc kubenswrapper[4925]: I0202 10:59:52.563335 4925 patch_prober.go:28] interesting pod/console-f9d7485db-nzwbr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 02 10:59:52 crc kubenswrapper[4925]: I0202 10:59:52.563382 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nzwbr" podUID="81af45ef-2049-4155-9c0b-ae722e6b8c8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 02 10:59:52 crc kubenswrapper[4925]: I0202 10:59:52.563427 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 10:59:52 crc kubenswrapper[4925]: I0202 10:59:52.640324 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 10:59:52 crc kubenswrapper[4925]: I0202 10:59:52.640383 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 10:59:52 crc 
kubenswrapper[4925]: I0202 10:59:52.641013 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 10:59:52 crc kubenswrapper[4925]: I0202 10:59:52.641042 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 10:59:52 crc kubenswrapper[4925]: I0202 10:59:52.864701 4925 patch_prober.go:28] interesting pod/router-default-5444994796-q87qm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:59:52 crc kubenswrapper[4925]: [-]has-synced failed: reason withheld Feb 02 10:59:52 crc kubenswrapper[4925]: [+]process-running ok Feb 02 10:59:52 crc kubenswrapper[4925]: healthz check failed Feb 02 10:59:52 crc kubenswrapper[4925]: I0202 10:59:52.864767 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q87qm" podUID="39d5f083-20be-4cb1-9c72-d7a52d54a578" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:59:53 crc kubenswrapper[4925]: I0202 10:59:53.865335 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:53 crc kubenswrapper[4925]: I0202 10:59:53.868105 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-q87qm" Feb 02 10:59:54 crc kubenswrapper[4925]: I0202 10:59:54.228826 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:54 crc kubenswrapper[4925]: I0202 10:59:54.234114 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f183d5-0612-452e-b762-c841df3a306d-metrics-certs\") pod \"network-metrics-daemon-hjf4s\" (UID: \"39f183d5-0612-452e-b762-c841df3a306d\") " pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:54 crc kubenswrapper[4925]: I0202 10:59:54.433828 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hjf4s" Feb 02 10:59:58 crc kubenswrapper[4925]: I0202 10:59:58.653432 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mpgcb"] Feb 02 10:59:58 crc kubenswrapper[4925]: I0202 10:59:58.653912 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" podUID="124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" containerName="controller-manager" containerID="cri-o://fb2b5547398b092739c59e0fe7485886652823344eba8c3729cc87ae46809d0f" gracePeriod=30 Feb 02 10:59:58 crc kubenswrapper[4925]: I0202 10:59:58.670726 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk"] Feb 02 10:59:58 crc kubenswrapper[4925]: I0202 10:59:58.670936 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" podUID="45405c2c-780c-4190-8cad-466ecfd84d2d" containerName="route-controller-manager" 
containerID="cri-o://3dafaac12b0282e867fb9421eb215c2f10db3cc404b77ef3f8f080c3d1898652" gracePeriod=30 Feb 02 10:59:59 crc kubenswrapper[4925]: I0202 10:59:59.719460 4925 generic.go:334] "Generic (PLEG): container finished" podID="124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" containerID="fb2b5547398b092739c59e0fe7485886652823344eba8c3729cc87ae46809d0f" exitCode=0 Feb 02 10:59:59 crc kubenswrapper[4925]: I0202 10:59:59.719525 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" event={"ID":"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b","Type":"ContainerDied","Data":"fb2b5547398b092739c59e0fe7485886652823344eba8c3729cc87ae46809d0f"} Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.129759 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw"] Feb 02 11:00:00 crc kubenswrapper[4925]: E0202 11:00:00.130267 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07ca5d9-b793-471e-9d88-3f48063053e3" containerName="pruner" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.130278 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07ca5d9-b793-471e-9d88-3f48063053e3" containerName="pruner" Feb 02 11:00:00 crc kubenswrapper[4925]: E0202 11:00:00.130297 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16df83ed-b915-4a95-8753-e25975e1d2ea" containerName="pruner" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.130303 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="16df83ed-b915-4a95-8753-e25975e1d2ea" containerName="pruner" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.130390 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07ca5d9-b793-471e-9d88-3f48063053e3" containerName="pruner" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.130404 4925 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="16df83ed-b915-4a95-8753-e25975e1d2ea" containerName="pruner" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.130771 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.135578 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.135653 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.142506 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw"] Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.313226 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6lzf\" (UniqueName: \"kubernetes.io/projected/e623d6f6-1bf2-43f4-a280-147617dbf9ef-kube-api-access-h6lzf\") pod \"collect-profiles-29500500-696pw\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.313300 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e623d6f6-1bf2-43f4-a280-147617dbf9ef-secret-volume\") pod \"collect-profiles-29500500-696pw\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.313399 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e623d6f6-1bf2-43f4-a280-147617dbf9ef-config-volume\") pod \"collect-profiles-29500500-696pw\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.414359 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6lzf\" (UniqueName: \"kubernetes.io/projected/e623d6f6-1bf2-43f4-a280-147617dbf9ef-kube-api-access-h6lzf\") pod \"collect-profiles-29500500-696pw\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.414410 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e623d6f6-1bf2-43f4-a280-147617dbf9ef-secret-volume\") pod \"collect-profiles-29500500-696pw\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.414482 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e623d6f6-1bf2-43f4-a280-147617dbf9ef-config-volume\") pod \"collect-profiles-29500500-696pw\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.415598 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e623d6f6-1bf2-43f4-a280-147617dbf9ef-config-volume\") pod \"collect-profiles-29500500-696pw\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.430967 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6lzf\" (UniqueName: \"kubernetes.io/projected/e623d6f6-1bf2-43f4-a280-147617dbf9ef-kube-api-access-h6lzf\") pod \"collect-profiles-29500500-696pw\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.438553 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e623d6f6-1bf2-43f4-a280-147617dbf9ef-secret-volume\") pod \"collect-profiles-29500500-696pw\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.454885 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.726963 4925 generic.go:334] "Generic (PLEG): container finished" podID="45405c2c-780c-4190-8cad-466ecfd84d2d" containerID="3dafaac12b0282e867fb9421eb215c2f10db3cc404b77ef3f8f080c3d1898652" exitCode=0 Feb 02 11:00:00 crc kubenswrapper[4925]: I0202 11:00:00.727005 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" event={"ID":"45405c2c-780c-4190-8cad-466ecfd84d2d","Type":"ContainerDied","Data":"3dafaac12b0282e867fb9421eb215c2f10db3cc404b77ef3f8f080c3d1898652"} Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.305961 4925 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bf6lk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 02 11:00:02 crc 
kubenswrapper[4925]: I0202 11:00:02.306474 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" podUID="45405c2c-780c-4190-8cad-466ecfd84d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.321659 4925 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mpgcb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.321725 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" podUID="124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.569056 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.573848 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.639700 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.639759 4925 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.639786 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.639846 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.641197 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-qx9mv" Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.642033 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.642070 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.642729 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"e68bf8e6f536c97947cbda8e5a7ef730f57fec1440f8065be063b54a6ca26563"} pod="openshift-console/downloads-7954f5f757-qx9mv" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.642840 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" containerID="cri-o://e68bf8e6f536c97947cbda8e5a7ef730f57fec1440f8065be063b54a6ca26563" gracePeriod=2 Feb 02 11:00:02 crc kubenswrapper[4925]: I0202 11:00:02.781672 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 11:00:03 crc kubenswrapper[4925]: E0202 11:00:03.053827 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 11:00:03 crc kubenswrapper[4925]: E0202 11:00:03.054058 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdjsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xf5qm_openshift-marketplace(51968f99-bd7d-4958-bb6f-ba8035b2e637): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 11:00:03 crc kubenswrapper[4925]: E0202 11:00:03.055246 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xf5qm" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" Feb 02 11:00:03 crc 
kubenswrapper[4925]: I0202 11:00:03.743324 4925 generic.go:334] "Generic (PLEG): container finished" podID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerID="e68bf8e6f536c97947cbda8e5a7ef730f57fec1440f8065be063b54a6ca26563" exitCode=0 Feb 02 11:00:03 crc kubenswrapper[4925]: I0202 11:00:03.743376 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qx9mv" event={"ID":"2c1d6c8a-41c7-48a0-853c-d1df60efb422","Type":"ContainerDied","Data":"e68bf8e6f536c97947cbda8e5a7ef730f57fec1440f8065be063b54a6ca26563"} Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.166406 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.215809 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58754dddbf-qbns4"] Feb 02 11:00:09 crc kubenswrapper[4925]: E0202 11:00:09.216255 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" containerName="controller-manager" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.216321 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" containerName="controller-manager" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.216626 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" containerName="controller-manager" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.217542 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.236217 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58754dddbf-qbns4"] Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.281993 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-client-ca\") pod \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.282160 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-config\") pod \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.282665 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-serving-cert\") pod \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.282910 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-client-ca" (OuterVolumeSpecName: "client-ca") pod "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" (UID: "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.282982 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-config" (OuterVolumeSpecName: "config") pod "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" (UID: "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.283026 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-proxy-ca-bundles\") pod \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.283110 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2m5l\" (UniqueName: \"kubernetes.io/projected/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-kube-api-access-n2m5l\") pod \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\" (UID: \"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b\") " Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.283406 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-client-ca\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.283477 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sls9j\" (UniqueName: \"kubernetes.io/projected/179a1459-7d61-4dc2-a510-6bae3cb4d24a-kube-api-access-sls9j\") pod \"controller-manager-58754dddbf-qbns4\" (UID: 
\"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.283563 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-config\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.283631 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" (UID: "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.283742 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/179a1459-7d61-4dc2-a510-6bae3cb4d24a-serving-cert\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.283835 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-proxy-ca-bundles\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.284161 4925 reconciler_common.go:293] "Volume detached for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.284251 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.284275 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.288955 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-kube-api-access-n2m5l" (OuterVolumeSpecName: "kube-api-access-n2m5l") pod "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" (UID: "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b"). InnerVolumeSpecName "kube-api-access-n2m5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.289039 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" (UID: "124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.386118 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/179a1459-7d61-4dc2-a510-6bae3cb4d24a-serving-cert\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.386180 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-proxy-ca-bundles\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.386213 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-client-ca\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.386246 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sls9j\" (UniqueName: \"kubernetes.io/projected/179a1459-7d61-4dc2-a510-6bae3cb4d24a-kube-api-access-sls9j\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.386278 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-config\") pod 
\"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.386314 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.386324 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2m5l\" (UniqueName: \"kubernetes.io/projected/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b-kube-api-access-n2m5l\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.387664 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-config\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.388799 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-client-ca\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.390676 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-proxy-ca-bundles\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.392107 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/179a1459-7d61-4dc2-a510-6bae3cb4d24a-serving-cert\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.415201 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sls9j\" (UniqueName: \"kubernetes.io/projected/179a1459-7d61-4dc2-a510-6bae3cb4d24a-kube-api-access-sls9j\") pod \"controller-manager-58754dddbf-qbns4\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.546989 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.778532 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" event={"ID":"124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b","Type":"ContainerDied","Data":"2a98f0c0a44e75c3a22ffe0c5ca60cc1812b5711b25507d203a96bba893193af"} Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.778650 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mpgcb" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.778706 4925 scope.go:117] "RemoveContainer" containerID="fb2b5547398b092739c59e0fe7485886652823344eba8c3729cc87ae46809d0f" Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.819042 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mpgcb"] Feb 02 11:00:09 crc kubenswrapper[4925]: I0202 11:00:09.824447 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mpgcb"] Feb 02 11:00:10 crc kubenswrapper[4925]: I0202 11:00:10.673962 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b" path="/var/lib/kubelet/pods/124e0efd-2cce-4f29-bcc3-3d1a6fd5a62b/volumes" Feb 02 11:00:12 crc kubenswrapper[4925]: I0202 11:00:12.640839 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:12 crc kubenswrapper[4925]: I0202 11:00:12.641328 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:13 crc kubenswrapper[4925]: I0202 11:00:13.306595 4925 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bf6lk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Feb 02 11:00:13 crc kubenswrapper[4925]: I0202 11:00:13.306661 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" podUID="45405c2c-780c-4190-8cad-466ecfd84d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:13 crc kubenswrapper[4925]: I0202 11:00:13.398770 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:00:13 crc kubenswrapper[4925]: I0202 11:00:13.398862 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:00:13 crc kubenswrapper[4925]: I0202 11:00:13.889036 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bj99s" Feb 02 11:00:15 crc kubenswrapper[4925]: E0202 11:00:15.284988 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 11:00:15 crc kubenswrapper[4925]: E0202 11:00:15.285348 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wclhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gcc4h_openshift-marketplace(0f7aa95c-3861-48ab-a30f-0301aad169d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 11:00:15 crc kubenswrapper[4925]: E0202 11:00:15.286665 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gcc4h" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" Feb 02 11:00:18 crc kubenswrapper[4925]: E0202 11:00:18.055329 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 11:00:18 crc kubenswrapper[4925]: E0202 11:00:18.055889 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56pv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qncpv_openshift-marketplace(bf7432d2-d4a9-4fa9-8570-e76d21c8e771): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 11:00:18 crc kubenswrapper[4925]: E0202 11:00:18.057114 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qncpv" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" Feb 02 11:00:18 crc kubenswrapper[4925]: I0202 11:00:18.619458 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58754dddbf-qbns4"] Feb 02 11:00:19 crc kubenswrapper[4925]: E0202 11:00:19.216505 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gcc4h" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" Feb 02 11:00:19 crc kubenswrapper[4925]: E0202 11:00:19.217333 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qncpv" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" Feb 02 11:00:19 crc kubenswrapper[4925]: I0202 11:00:19.789807 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.330897 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.332507 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.336145 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.336535 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.344675 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.405320 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9577233-8642-44a5-98f5-0538ee3f57cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9577233-8642-44a5-98f5-0538ee3f57cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.405584 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9577233-8642-44a5-98f5-0538ee3f57cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9577233-8642-44a5-98f5-0538ee3f57cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.507349 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b9577233-8642-44a5-98f5-0538ee3f57cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9577233-8642-44a5-98f5-0538ee3f57cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.507499 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9577233-8642-44a5-98f5-0538ee3f57cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9577233-8642-44a5-98f5-0538ee3f57cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.507628 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9577233-8642-44a5-98f5-0538ee3f57cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9577233-8642-44a5-98f5-0538ee3f57cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.531278 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9577233-8642-44a5-98f5-0538ee3f57cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9577233-8642-44a5-98f5-0538ee3f57cd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:20 crc kubenswrapper[4925]: I0202 11:00:20.660308 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:22 crc kubenswrapper[4925]: I0202 11:00:22.640351 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:22 crc kubenswrapper[4925]: I0202 11:00:22.641205 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:23 crc kubenswrapper[4925]: I0202 11:00:23.306606 4925 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bf6lk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 11:00:23 crc kubenswrapper[4925]: I0202 11:00:23.307063 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" podUID="45405c2c-780c-4190-8cad-466ecfd84d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:23 crc kubenswrapper[4925]: E0202 11:00:23.581946 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 11:00:23 crc 
kubenswrapper[4925]: E0202 11:00:23.582337 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fr9gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-26nhd_openshift-marketplace(560d449d-bbfb-4f5b-a14f-4a26175a20d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 11:00:23 crc kubenswrapper[4925]: E0202 11:00:23.583507 4925 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-26nhd" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.322955 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.323640 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.345856 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.464376 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1472df2-041e-456c-b47a-fd15943af977-kube-api-access\") pod \"installer-9-crc\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.464479 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.464518 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-var-lock\") pod \"installer-9-crc\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.566201 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1472df2-041e-456c-b47a-fd15943af977-kube-api-access\") pod \"installer-9-crc\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.566292 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.566323 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-var-lock\") pod \"installer-9-crc\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.566381 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-var-lock\") pod \"installer-9-crc\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.566415 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.600907 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1472df2-041e-456c-b47a-fd15943af977-kube-api-access\") pod \"installer-9-crc\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:24 crc kubenswrapper[4925]: I0202 11:00:24.656681 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:00:25 crc kubenswrapper[4925]: E0202 11:00:25.095210 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-26nhd" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" Feb 02 11:00:25 crc kubenswrapper[4925]: E0202 11:00:25.128494 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 11:00:25 crc kubenswrapper[4925]: E0202 11:00:25.128718 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsdkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-d5f7d_openshift-marketplace(c985f150-ec7d-4175-99a1-8fb775b7d7d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 11:00:25 crc kubenswrapper[4925]: E0202 11:00:25.130118 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-d5f7d" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" Feb 02 11:00:25 crc 
kubenswrapper[4925]: E0202 11:00:25.136759 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 11:00:25 crc kubenswrapper[4925]: E0202 11:00:25.136973 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6tcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-pn6rl_openshift-marketplace(46c2cde5-148b-44c2-821a-a470122f1167): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 11:00:25 crc kubenswrapper[4925]: E0202 11:00:25.138276 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pn6rl" podUID="46c2cde5-148b-44c2-821a-a470122f1167" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.168656 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.210208 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws"] Feb 02 11:00:25 crc kubenswrapper[4925]: E0202 11:00:25.210566 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45405c2c-780c-4190-8cad-466ecfd84d2d" containerName="route-controller-manager" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.210603 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="45405c2c-780c-4190-8cad-466ecfd84d2d" containerName="route-controller-manager" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.210752 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="45405c2c-780c-4190-8cad-466ecfd84d2d" containerName="route-controller-manager" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.211281 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.212460 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws"] Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.275105 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvkv7\" (UniqueName: \"kubernetes.io/projected/45405c2c-780c-4190-8cad-466ecfd84d2d-kube-api-access-cvkv7\") pod \"45405c2c-780c-4190-8cad-466ecfd84d2d\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.275240 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-client-ca\") pod \"45405c2c-780c-4190-8cad-466ecfd84d2d\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.275270 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45405c2c-780c-4190-8cad-466ecfd84d2d-serving-cert\") pod \"45405c2c-780c-4190-8cad-466ecfd84d2d\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.275428 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-config\") pod \"45405c2c-780c-4190-8cad-466ecfd84d2d\" (UID: \"45405c2c-780c-4190-8cad-466ecfd84d2d\") " Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.275650 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d5ee896-9246-434b-a043-ab677266af4e-serving-cert\") pod 
\"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.275703 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-client-ca\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.275821 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-config\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.275912 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcskc\" (UniqueName: \"kubernetes.io/projected/3d5ee896-9246-434b-a043-ab677266af4e-kube-api-access-kcskc\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.275932 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "45405c2c-780c-4190-8cad-466ecfd84d2d" (UID: "45405c2c-780c-4190-8cad-466ecfd84d2d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.276142 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.276428 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-config" (OuterVolumeSpecName: "config") pod "45405c2c-780c-4190-8cad-466ecfd84d2d" (UID: "45405c2c-780c-4190-8cad-466ecfd84d2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.280727 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45405c2c-780c-4190-8cad-466ecfd84d2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45405c2c-780c-4190-8cad-466ecfd84d2d" (UID: "45405c2c-780c-4190-8cad-466ecfd84d2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.287582 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45405c2c-780c-4190-8cad-466ecfd84d2d-kube-api-access-cvkv7" (OuterVolumeSpecName: "kube-api-access-cvkv7") pod "45405c2c-780c-4190-8cad-466ecfd84d2d" (UID: "45405c2c-780c-4190-8cad-466ecfd84d2d"). InnerVolumeSpecName "kube-api-access-cvkv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.377127 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-client-ca\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.377194 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d5ee896-9246-434b-a043-ab677266af4e-serving-cert\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.377274 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-config\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.377335 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcskc\" (UniqueName: \"kubernetes.io/projected/3d5ee896-9246-434b-a043-ab677266af4e-kube-api-access-kcskc\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.377435 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/45405c2c-780c-4190-8cad-466ecfd84d2d-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.377458 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvkv7\" (UniqueName: \"kubernetes.io/projected/45405c2c-780c-4190-8cad-466ecfd84d2d-kube-api-access-cvkv7\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.377478 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45405c2c-780c-4190-8cad-466ecfd84d2d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.379353 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-client-ca\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.381648 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-config\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.381812 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d5ee896-9246-434b-a043-ab677266af4e-serving-cert\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.404276 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcskc\" (UniqueName: \"kubernetes.io/projected/3d5ee896-9246-434b-a043-ab677266af4e-kube-api-access-kcskc\") pod \"route-controller-manager-59df95688f-pxvws\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") " pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.537726 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.894399 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" event={"ID":"45405c2c-780c-4190-8cad-466ecfd84d2d","Type":"ContainerDied","Data":"8068bf9024d08071cceaf918f898c741afe0a897b5c57dd66a2ece9b9873b4d5"} Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.894490 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk" Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.973805 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk"] Feb 02 11:00:25 crc kubenswrapper[4925]: I0202 11:00:25.981480 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bf6lk"] Feb 02 11:00:26 crc kubenswrapper[4925]: I0202 11:00:26.670995 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45405c2c-780c-4190-8cad-466ecfd84d2d" path="/var/lib/kubelet/pods/45405c2c-780c-4190-8cad-466ecfd84d2d/volumes" Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.017089 4925 scope.go:117] "RemoveContainer" containerID="3dafaac12b0282e867fb9421eb215c2f10db3cc404b77ef3f8f080c3d1898652" Feb 02 11:00:29 crc kubenswrapper[4925]: E0202 11:00:29.020752 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 11:00:29 crc kubenswrapper[4925]: E0202 11:00:29.020875 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lchfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gmldm_openshift-marketplace(b01fd158-f4e2-4ec8-953b-12dae9c49dd7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 11:00:29 crc kubenswrapper[4925]: E0202 11:00:29.022113 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gmldm" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" Feb 02 11:00:29 crc 
kubenswrapper[4925]: E0202 11:00:29.027883 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 11:00:29 crc kubenswrapper[4925]: E0202 11:00:29.028130 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbkxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-tp28w_openshift-marketplace(c1044ab1-2d86-4f71-995a-5994d6b2262e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 11:00:29 crc kubenswrapper[4925]: E0202 11:00:29.029607 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tp28w" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.453988 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw"] Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.459731 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58754dddbf-qbns4"] Feb 02 11:00:29 crc kubenswrapper[4925]: W0202 11:00:29.469989 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod179a1459_7d61_4dc2_a510_6bae3cb4d24a.slice/crio-483ba474a6715f9a0da0a132a57650695ebbbe69b8b3e2eece56d5878ba79d60 WatchSource:0}: Error finding container 483ba474a6715f9a0da0a132a57650695ebbbe69b8b3e2eece56d5878ba79d60: Status 404 returned error can't find the container with id 483ba474a6715f9a0da0a132a57650695ebbbe69b8b3e2eece56d5878ba79d60 Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.499724 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws"] Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.510674 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 11:00:29 crc kubenswrapper[4925]: W0202 11:00:29.526607 4925 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb1472df2_041e_456c_b47a_fd15943af977.slice/crio-8a146995c608771935e8ff137367db3a0b181768c3c999f23f1cef8e34dc0933 WatchSource:0}: Error finding container 8a146995c608771935e8ff137367db3a0b181768c3c999f23f1cef8e34dc0933: Status 404 returned error can't find the container with id 8a146995c608771935e8ff137367db3a0b181768c3c999f23f1cef8e34dc0933 Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.574795 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.578592 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hjf4s"] Feb 02 11:00:29 crc kubenswrapper[4925]: W0202 11:00:29.580969 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39f183d5_0612_452e_b762_c841df3a306d.slice/crio-b07d1958241a8ecddc5340a48621e342cb021083941636f9dae444df0f41a7f5 WatchSource:0}: Error finding container b07d1958241a8ecddc5340a48621e342cb021083941636f9dae444df0f41a7f5: Status 404 returned error can't find the container with id b07d1958241a8ecddc5340a48621e342cb021083941636f9dae444df0f41a7f5 Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.919480 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" event={"ID":"39f183d5-0612-452e-b762-c841df3a306d","Type":"ContainerStarted","Data":"b07d1958241a8ecddc5340a48621e342cb021083941636f9dae444df0f41a7f5"} Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.920550 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1472df2-041e-456c-b47a-fd15943af977","Type":"ContainerStarted","Data":"8a146995c608771935e8ff137367db3a0b181768c3c999f23f1cef8e34dc0933"} Feb 02 11:00:29 crc kubenswrapper[4925]: 
I0202 11:00:29.923092 4925 generic.go:334] "Generic (PLEG): container finished" podID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerID="f3c5b65c4554f93c02ac03604f372afb150aeb1541852eb3444550ae110cb15e" exitCode=0 Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.923154 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf5qm" event={"ID":"51968f99-bd7d-4958-bb6f-ba8035b2e637","Type":"ContainerDied","Data":"f3c5b65c4554f93c02ac03604f372afb150aeb1541852eb3444550ae110cb15e"} Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.926178 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" event={"ID":"3d5ee896-9246-434b-a043-ab677266af4e","Type":"ContainerStarted","Data":"9549ca1bc1c63d00fd20fa74640609c1369419ce90f3bf2c586dcc5b50aeb41f"} Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.926209 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" event={"ID":"3d5ee896-9246-434b-a043-ab677266af4e","Type":"ContainerStarted","Data":"63cd7db02f7563fe335e857edade3e922e9a0aa0ed2ce351d10906182abcb825"} Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.928657 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qx9mv" event={"ID":"2c1d6c8a-41c7-48a0-853c-d1df60efb422","Type":"ContainerStarted","Data":"1754236c2962252b53e1212d3e512d95ccf2076e6865866350e649e2fe955c8f"} Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.929930 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qx9mv" Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.929852 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.930188 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.930231 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" event={"ID":"179a1459-7d61-4dc2-a510-6bae3cb4d24a","Type":"ContainerStarted","Data":"7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d"} Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.930341 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.930356 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" event={"ID":"179a1459-7d61-4dc2-a510-6bae3cb4d24a","Type":"ContainerStarted","Data":"483ba474a6715f9a0da0a132a57650695ebbbe69b8b3e2eece56d5878ba79d60"} Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.930258 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" podUID="179a1459-7d61-4dc2-a510-6bae3cb4d24a" containerName="controller-manager" containerID="cri-o://7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d" gracePeriod=30 Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.932569 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"b9577233-8642-44a5-98f5-0538ee3f57cd","Type":"ContainerStarted","Data":"7421eecd0b148246f0f5debbaa91a489bb3215d820015bcbad0b25725d46937f"} Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.938770 4925 generic.go:334] "Generic (PLEG): container finished" podID="e623d6f6-1bf2-43f4-a280-147617dbf9ef" containerID="701751bde1c852f488d42123640b2dfda58d005c19140f23a8612c24d1153520" exitCode=0 Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.938836 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" event={"ID":"e623d6f6-1bf2-43f4-a280-147617dbf9ef","Type":"ContainerDied","Data":"701751bde1c852f488d42123640b2dfda58d005c19140f23a8612c24d1153520"} Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.938966 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" event={"ID":"e623d6f6-1bf2-43f4-a280-147617dbf9ef","Type":"ContainerStarted","Data":"aa66d8b88f0349daac35aba4280a772b614d3f82c5c0e1bdd014c6dcdb029187"} Feb 02 11:00:29 crc kubenswrapper[4925]: E0202 11:00:29.941831 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tp28w" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" Feb 02 11:00:29 crc kubenswrapper[4925]: E0202 11:00:29.941920 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gmldm" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.968414 4925 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" podStartSLOduration=11.968273228 podStartE2EDuration="11.968273228s" podCreationTimestamp="2026-02-02 11:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:29.966895631 +0000 UTC m=+206.971144593" watchObservedRunningTime="2026-02-02 11:00:29.968273228 +0000 UTC m=+206.972522190" Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.970868 4925 patch_prober.go:28] interesting pod/controller-manager-58754dddbf-qbns4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:47018->10.217.0.56:8443: read: connection reset by peer" start-of-body= Feb 02 11:00:29 crc kubenswrapper[4925]: I0202 11:00:29.970915 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" podUID="179a1459-7d61-4dc2-a510-6bae3cb4d24a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:47018->10.217.0.56:8443: read: connection reset by peer" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.021311 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" podStartSLOduration=32.021295266 podStartE2EDuration="32.021295266s" podCreationTimestamp="2026-02-02 10:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:30.015504552 +0000 UTC m=+207.019753504" watchObservedRunningTime="2026-02-02 11:00:30.021295266 +0000 UTC m=+207.025544228" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.287071 4925 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.323891 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c4c76fdff-9268j"] Feb 02 11:00:30 crc kubenswrapper[4925]: E0202 11:00:30.324234 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179a1459-7d61-4dc2-a510-6bae3cb4d24a" containerName="controller-manager" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.324256 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="179a1459-7d61-4dc2-a510-6bae3cb4d24a" containerName="controller-manager" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.324389 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="179a1459-7d61-4dc2-a510-6bae3cb4d24a" containerName="controller-manager" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.324880 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.337776 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c76fdff-9268j"] Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.359813 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-client-ca\") pod \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.360111 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/179a1459-7d61-4dc2-a510-6bae3cb4d24a-serving-cert\") pod \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.360156 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-config\") pod \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.360228 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-proxy-ca-bundles\") pod \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\" (UID: \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.360262 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sls9j\" (UniqueName: \"kubernetes.io/projected/179a1459-7d61-4dc2-a510-6bae3cb4d24a-kube-api-access-sls9j\") pod \"179a1459-7d61-4dc2-a510-6bae3cb4d24a\" (UID: 
\"179a1459-7d61-4dc2-a510-6bae3cb4d24a\") " Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.360725 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-client-ca" (OuterVolumeSpecName: "client-ca") pod "179a1459-7d61-4dc2-a510-6bae3cb4d24a" (UID: "179a1459-7d61-4dc2-a510-6bae3cb4d24a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.361344 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-config" (OuterVolumeSpecName: "config") pod "179a1459-7d61-4dc2-a510-6bae3cb4d24a" (UID: "179a1459-7d61-4dc2-a510-6bae3cb4d24a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.361900 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "179a1459-7d61-4dc2-a510-6bae3cb4d24a" (UID: "179a1459-7d61-4dc2-a510-6bae3cb4d24a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.365418 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179a1459-7d61-4dc2-a510-6bae3cb4d24a-kube-api-access-sls9j" (OuterVolumeSpecName: "kube-api-access-sls9j") pod "179a1459-7d61-4dc2-a510-6bae3cb4d24a" (UID: "179a1459-7d61-4dc2-a510-6bae3cb4d24a"). InnerVolumeSpecName "kube-api-access-sls9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.365677 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/179a1459-7d61-4dc2-a510-6bae3cb4d24a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "179a1459-7d61-4dc2-a510-6bae3cb4d24a" (UID: "179a1459-7d61-4dc2-a510-6bae3cb4d24a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462115 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ab306c-9181-4e57-8224-2841c8d2effe-serving-cert\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462448 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdt7x\" (UniqueName: \"kubernetes.io/projected/e0ab306c-9181-4e57-8224-2841c8d2effe-kube-api-access-qdt7x\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462481 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-proxy-ca-bundles\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462524 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-config\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462544 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-client-ca\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462588 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/179a1459-7d61-4dc2-a510-6bae3cb4d24a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462602 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462616 4925 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462629 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sls9j\" (UniqueName: \"kubernetes.io/projected/179a1459-7d61-4dc2-a510-6bae3cb4d24a-kube-api-access-sls9j\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.462643 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/179a1459-7d61-4dc2-a510-6bae3cb4d24a-client-ca\") on node \"crc\" 
DevicePath \"\"" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.563871 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-proxy-ca-bundles\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.563927 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-config\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.563947 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-client-ca\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.564005 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ab306c-9181-4e57-8224-2841c8d2effe-serving-cert\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.564026 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdt7x\" (UniqueName: \"kubernetes.io/projected/e0ab306c-9181-4e57-8224-2841c8d2effe-kube-api-access-qdt7x\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: 
\"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.565092 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-client-ca\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.565453 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-proxy-ca-bundles\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.565789 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-config\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.568688 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ab306c-9181-4e57-8224-2841c8d2effe-serving-cert\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.578063 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdt7x\" (UniqueName: 
\"kubernetes.io/projected/e0ab306c-9181-4e57-8224-2841c8d2effe-kube-api-access-qdt7x\") pod \"controller-manager-5c4c76fdff-9268j\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") " pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.637153 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.894162 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c76fdff-9268j"] Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.944838 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1472df2-041e-456c-b47a-fd15943af977","Type":"ContainerStarted","Data":"3b83a0b3240b0f742a4ba6f62b04b004da7d3d102bf70bd09194b5d92a39e6fa"} Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.947252 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf5qm" event={"ID":"51968f99-bd7d-4958-bb6f-ba8035b2e637","Type":"ContainerStarted","Data":"b400fc8915bc867126cef5aca6f8d1dbf6fee7279269bcc3d6a6a6d09b9862e9"} Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.952453 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" event={"ID":"e0ab306c-9181-4e57-8224-2841c8d2effe","Type":"ContainerStarted","Data":"9a7036938e419f0fc300668ae1a4610b7624fd4ab7370f7aaffc03c2415f550e"} Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.953873 4925 generic.go:334] "Generic (PLEG): container finished" podID="179a1459-7d61-4dc2-a510-6bae3cb4d24a" containerID="7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d" exitCode=0 Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.953927 4925 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" event={"ID":"179a1459-7d61-4dc2-a510-6bae3cb4d24a","Type":"ContainerDied","Data":"7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d"} Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.953945 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" event={"ID":"179a1459-7d61-4dc2-a510-6bae3cb4d24a","Type":"ContainerDied","Data":"483ba474a6715f9a0da0a132a57650695ebbbe69b8b3e2eece56d5878ba79d60"} Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.953960 4925 scope.go:117] "RemoveContainer" containerID="7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.954088 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58754dddbf-qbns4" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.964504 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.964482661 podStartE2EDuration="6.964482661s" podCreationTimestamp="2026-02-02 11:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:30.959263873 +0000 UTC m=+207.963512845" watchObservedRunningTime="2026-02-02 11:00:30.964482661 +0000 UTC m=+207.968731623" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.967892 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hjf4s" event={"ID":"39f183d5-0612-452e-b762-c841df3a306d","Type":"ContainerStarted","Data":"d7ce8c2deb42243ddbd7ac0c6178c93ab754c23a60acbb02a0527dacd633b57a"} Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.967933 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-hjf4s" event={"ID":"39f183d5-0612-452e-b762-c841df3a306d","Type":"ContainerStarted","Data":"877ddbb66da6157ad4ad9fb4d982ac592068ccbd98718a94adb587aec6de94fc"} Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.977456 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xf5qm" podStartSLOduration=3.210426399 podStartE2EDuration="49.977440846s" podCreationTimestamp="2026-02-02 10:59:41 +0000 UTC" firstStartedPulling="2026-02-02 10:59:43.5818122 +0000 UTC m=+160.586061162" lastFinishedPulling="2026-02-02 11:00:30.348826617 +0000 UTC m=+207.353075609" observedRunningTime="2026-02-02 11:00:30.976305485 +0000 UTC m=+207.980554457" watchObservedRunningTime="2026-02-02 11:00:30.977440846 +0000 UTC m=+207.981689808" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.987341 4925 scope.go:117] "RemoveContainer" containerID="7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d" Feb 02 11:00:30 crc kubenswrapper[4925]: E0202 11:00:30.987659 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d\": container with ID starting with 7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d not found: ID does not exist" containerID="7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.987699 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d"} err="failed to get container status \"7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d\": rpc error: code = NotFound desc = could not find container \"7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d\": container with ID starting with 
7d4ef02393015a2358a3cffabc4738e5854302bcd86bdce8bb9335bc3f3dbf9d not found: ID does not exist" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.988193 4925 generic.go:334] "Generic (PLEG): container finished" podID="b9577233-8642-44a5-98f5-0538ee3f57cd" containerID="2bd020bb1198e07d52427c799c2521dea59ca7702fc4cb6e9d795a0ebbb0c531" exitCode=0 Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.988252 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b9577233-8642-44a5-98f5-0538ee3f57cd","Type":"ContainerDied","Data":"2bd020bb1198e07d52427c799c2521dea59ca7702fc4cb6e9d795a0ebbb0c531"} Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.989181 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.990200 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.990286 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:30 crc kubenswrapper[4925]: I0202 11:00:30.998505 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.010270 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58754dddbf-qbns4"] Feb 02 
11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.015989 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58754dddbf-qbns4"] Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.039986 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hjf4s" podStartSLOduration=180.039969027 podStartE2EDuration="3m0.039969027s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:31.018311711 +0000 UTC m=+208.022560663" watchObservedRunningTime="2026-02-02 11:00:31.039969027 +0000 UTC m=+208.044217999" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.228046 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.396272 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e623d6f6-1bf2-43f4-a280-147617dbf9ef-secret-volume\") pod \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.396395 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e623d6f6-1bf2-43f4-a280-147617dbf9ef-config-volume\") pod \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.396446 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6lzf\" (UniqueName: \"kubernetes.io/projected/e623d6f6-1bf2-43f4-a280-147617dbf9ef-kube-api-access-h6lzf\") pod 
\"e623d6f6-1bf2-43f4-a280-147617dbf9ef\" (UID: \"e623d6f6-1bf2-43f4-a280-147617dbf9ef\") " Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.397145 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e623d6f6-1bf2-43f4-a280-147617dbf9ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "e623d6f6-1bf2-43f4-a280-147617dbf9ef" (UID: "e623d6f6-1bf2-43f4-a280-147617dbf9ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.401544 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e623d6f6-1bf2-43f4-a280-147617dbf9ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e623d6f6-1bf2-43f4-a280-147617dbf9ef" (UID: "e623d6f6-1bf2-43f4-a280-147617dbf9ef"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.402228 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e623d6f6-1bf2-43f4-a280-147617dbf9ef-kube-api-access-h6lzf" (OuterVolumeSpecName: "kube-api-access-h6lzf") pod "e623d6f6-1bf2-43f4-a280-147617dbf9ef" (UID: "e623d6f6-1bf2-43f4-a280-147617dbf9ef"). InnerVolumeSpecName "kube-api-access-h6lzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.498406 4925 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e623d6f6-1bf2-43f4-a280-147617dbf9ef-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.498449 4925 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e623d6f6-1bf2-43f4-a280-147617dbf9ef-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.498462 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6lzf\" (UniqueName: \"kubernetes.io/projected/e623d6f6-1bf2-43f4-a280-147617dbf9ef-kube-api-access-h6lzf\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.719314 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xf5qm" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.719507 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xf5qm" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.996498 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.996490 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw" event={"ID":"e623d6f6-1bf2-43f4-a280-147617dbf9ef","Type":"ContainerDied","Data":"aa66d8b88f0349daac35aba4280a772b614d3f82c5c0e1bdd014c6dcdb029187"} Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.999564 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:31 crc kubenswrapper[4925]: I0202 11:00:31.999627 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:31.997150 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa66d8b88f0349daac35aba4280a772b614d3f82c5c0e1bdd014c6dcdb029187" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.000550 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.000602 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" event={"ID":"e0ab306c-9181-4e57-8224-2841c8d2effe","Type":"ContainerStarted","Data":"342838bccbed115e54fdc7dbdfb5d95cb32cb4d84a555843f3bbe4417b736c41"} Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.005155 4925 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.016868 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" podStartSLOduration=14.016855757 podStartE2EDuration="14.016855757s" podCreationTimestamp="2026-02-02 11:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:32.014388881 +0000 UTC m=+209.018637843" watchObservedRunningTime="2026-02-02 11:00:32.016855757 +0000 UTC m=+209.021104719" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.273281 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.409182 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9577233-8642-44a5-98f5-0538ee3f57cd-kubelet-dir\") pod \"b9577233-8642-44a5-98f5-0538ee3f57cd\" (UID: \"b9577233-8642-44a5-98f5-0538ee3f57cd\") " Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.409243 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9577233-8642-44a5-98f5-0538ee3f57cd-kube-api-access\") pod \"b9577233-8642-44a5-98f5-0538ee3f57cd\" (UID: \"b9577233-8642-44a5-98f5-0538ee3f57cd\") " Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.409836 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9577233-8642-44a5-98f5-0538ee3f57cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b9577233-8642-44a5-98f5-0538ee3f57cd" (UID: "b9577233-8642-44a5-98f5-0538ee3f57cd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.414474 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9577233-8642-44a5-98f5-0538ee3f57cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b9577233-8642-44a5-98f5-0538ee3f57cd" (UID: "b9577233-8642-44a5-98f5-0538ee3f57cd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.510908 4925 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9577233-8642-44a5-98f5-0538ee3f57cd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.511125 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9577233-8642-44a5-98f5-0538ee3f57cd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.639577 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.639909 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.639589 4925 patch_prober.go:28] interesting pod/downloads-7954f5f757-qx9mv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 
10.217.0.8:8080: connect: connection refused" start-of-body= Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.640228 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qx9mv" podUID="2c1d6c8a-41c7-48a0-853c-d1df60efb422" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.671797 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179a1459-7d61-4dc2-a510-6bae3cb4d24a" path="/var/lib/kubelet/pods/179a1459-7d61-4dc2-a510-6bae3cb4d24a/volumes" Feb 02 11:00:32 crc kubenswrapper[4925]: I0202 11:00:32.965454 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xf5qm" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerName="registry-server" probeResult="failure" output=< Feb 02 11:00:32 crc kubenswrapper[4925]: timeout: failed to connect service ":50051" within 1s Feb 02 11:00:32 crc kubenswrapper[4925]: > Feb 02 11:00:33 crc kubenswrapper[4925]: I0202 11:00:33.005461 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcc4h" event={"ID":"0f7aa95c-3861-48ab-a30f-0301aad169d7","Type":"ContainerStarted","Data":"3f7259dfd4e49bbdfd5394ffc09a348322a3cc96788c3f0d56713eece6862da9"} Feb 02 11:00:33 crc kubenswrapper[4925]: I0202 11:00:33.007136 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 11:00:33 crc kubenswrapper[4925]: I0202 11:00:33.007841 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b9577233-8642-44a5-98f5-0538ee3f57cd","Type":"ContainerDied","Data":"7421eecd0b148246f0f5debbaa91a489bb3215d820015bcbad0b25725d46937f"} Feb 02 11:00:33 crc kubenswrapper[4925]: I0202 11:00:33.007882 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7421eecd0b148246f0f5debbaa91a489bb3215d820015bcbad0b25725d46937f" Feb 02 11:00:34 crc kubenswrapper[4925]: I0202 11:00:34.014611 4925 generic.go:334] "Generic (PLEG): container finished" podID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerID="3f7259dfd4e49bbdfd5394ffc09a348322a3cc96788c3f0d56713eece6862da9" exitCode=0 Feb 02 11:00:34 crc kubenswrapper[4925]: I0202 11:00:34.015528 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcc4h" event={"ID":"0f7aa95c-3861-48ab-a30f-0301aad169d7","Type":"ContainerDied","Data":"3f7259dfd4e49bbdfd5394ffc09a348322a3cc96788c3f0d56713eece6862da9"} Feb 02 11:00:41 crc kubenswrapper[4925]: I0202 11:00:41.092156 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcc4h" event={"ID":"0f7aa95c-3861-48ab-a30f-0301aad169d7","Type":"ContainerStarted","Data":"2be20294d9fa6bfab7e85cd7ffe04f972c8c7637fa4d7064e8757ee89f751818"} Feb 02 11:00:41 crc kubenswrapper[4925]: I0202 11:00:41.094955 4925 generic.go:334] "Generic (PLEG): container finished" podID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerID="eec89e083e1e92a05e30abb3347d1e3ac54c725682f7de1c1b8940bd75f14e11" exitCode=0 Feb 02 11:00:41 crc kubenswrapper[4925]: I0202 11:00:41.095024 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qncpv" 
event={"ID":"bf7432d2-d4a9-4fa9-8570-e76d21c8e771","Type":"ContainerDied","Data":"eec89e083e1e92a05e30abb3347d1e3ac54c725682f7de1c1b8940bd75f14e11"} Feb 02 11:00:41 crc kubenswrapper[4925]: I0202 11:00:41.112529 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gcc4h" podStartSLOduration=3.871052938 podStartE2EDuration="1m0.112506133s" podCreationTimestamp="2026-02-02 10:59:41 +0000 UTC" firstStartedPulling="2026-02-02 10:59:43.581840041 +0000 UTC m=+160.586089013" lastFinishedPulling="2026-02-02 11:00:39.823293176 +0000 UTC m=+216.827542208" observedRunningTime="2026-02-02 11:00:41.112368499 +0000 UTC m=+218.116617481" watchObservedRunningTime="2026-02-02 11:00:41.112506133 +0000 UTC m=+218.116755105" Feb 02 11:00:41 crc kubenswrapper[4925]: I0202 11:00:41.944921 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 11:00:41 crc kubenswrapper[4925]: I0202 11:00:41.945466 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 11:00:42 crc kubenswrapper[4925]: I0202 11:00:42.085665 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xf5qm" Feb 02 11:00:42 crc kubenswrapper[4925]: I0202 11:00:42.085941 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 11:00:42 crc kubenswrapper[4925]: I0202 11:00:42.126250 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xf5qm" Feb 02 11:00:42 crc kubenswrapper[4925]: I0202 11:00:42.660393 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qx9mv" Feb 02 11:00:43 crc kubenswrapper[4925]: I0202 11:00:43.399203 4925 
patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:00:43 crc kubenswrapper[4925]: I0202 11:00:43.400960 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:00:43 crc kubenswrapper[4925]: I0202 11:00:43.401226 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:00:43 crc kubenswrapper[4925]: I0202 11:00:43.402084 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:00:43 crc kubenswrapper[4925]: I0202 11:00:43.402404 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48" gracePeriod=600 Feb 02 11:00:46 crc kubenswrapper[4925]: I0202 11:00:46.130821 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48" exitCode=0 Feb 02 11:00:46 crc 
kubenswrapper[4925]: I0202 11:00:46.130881 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48"} Feb 02 11:00:52 crc kubenswrapper[4925]: I0202 11:00:52.011012 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 11:00:54 crc kubenswrapper[4925]: I0202 11:00:54.185673 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26nhd" event={"ID":"560d449d-bbfb-4f5b-a14f-4a26175a20d2","Type":"ContainerStarted","Data":"a97cd00301c5057dac8cf80ba330a25e589377010fe8efe602b76addaa7808fb"} Feb 02 11:00:54 crc kubenswrapper[4925]: I0202 11:00:54.187845 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gmldm" event={"ID":"b01fd158-f4e2-4ec8-953b-12dae9c49dd7","Type":"ContainerStarted","Data":"a2c5cd00b90d42f2084e1e80d4ac37c40974b301a0275f764a6ff8f01e375570"} Feb 02 11:00:54 crc kubenswrapper[4925]: I0202 11:00:54.190267 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp28w" event={"ID":"c1044ab1-2d86-4f71-995a-5994d6b2262e","Type":"ContainerStarted","Data":"a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08"} Feb 02 11:00:54 crc kubenswrapper[4925]: I0202 11:00:54.192472 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5f7d" event={"ID":"c985f150-ec7d-4175-99a1-8fb775b7d7d9","Type":"ContainerStarted","Data":"acca7a6ecd455f83f22c003267dbabef1b98c06c702784748b8f4c6430438a1c"} Feb 02 11:00:54 crc kubenswrapper[4925]: I0202 11:00:54.194315 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn6rl" 
event={"ID":"46c2cde5-148b-44c2-821a-a470122f1167","Type":"ContainerStarted","Data":"6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d"} Feb 02 11:00:54 crc kubenswrapper[4925]: I0202 11:00:54.196627 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qncpv" event={"ID":"bf7432d2-d4a9-4fa9-8570-e76d21c8e771","Type":"ContainerStarted","Data":"d5c84b55b6d88fcb03e9f67616e5b2ec110c00b0ad4174395bd831732ea7c920"} Feb 02 11:00:54 crc kubenswrapper[4925]: I0202 11:00:54.198847 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"44bb0f6e97b3094fc1dd166bb55ea42a68acba564a9b407084dca96d96dbdd51"} Feb 02 11:00:54 crc kubenswrapper[4925]: I0202 11:00:54.248591 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qncpv" podStartSLOduration=3.067749059 podStartE2EDuration="1m13.248570699s" podCreationTimestamp="2026-02-02 10:59:41 +0000 UTC" firstStartedPulling="2026-02-02 10:59:43.576637394 +0000 UTC m=+160.580886366" lastFinishedPulling="2026-02-02 11:00:53.757458994 +0000 UTC m=+230.761708006" observedRunningTime="2026-02-02 11:00:54.245691413 +0000 UTC m=+231.249940375" watchObservedRunningTime="2026-02-02 11:00:54.248570699 +0000 UTC m=+231.252819661" Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.207396 4925 generic.go:334] "Generic (PLEG): container finished" podID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerID="a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08" exitCode=0 Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.207498 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp28w" 
event={"ID":"c1044ab1-2d86-4f71-995a-5994d6b2262e","Type":"ContainerDied","Data":"a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08"} Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.213119 4925 generic.go:334] "Generic (PLEG): container finished" podID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerID="a2c5cd00b90d42f2084e1e80d4ac37c40974b301a0275f764a6ff8f01e375570" exitCode=0 Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.213197 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gmldm" event={"ID":"b01fd158-f4e2-4ec8-953b-12dae9c49dd7","Type":"ContainerDied","Data":"a2c5cd00b90d42f2084e1e80d4ac37c40974b301a0275f764a6ff8f01e375570"} Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.216422 4925 generic.go:334] "Generic (PLEG): container finished" podID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerID="acca7a6ecd455f83f22c003267dbabef1b98c06c702784748b8f4c6430438a1c" exitCode=0 Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.216497 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5f7d" event={"ID":"c985f150-ec7d-4175-99a1-8fb775b7d7d9","Type":"ContainerDied","Data":"acca7a6ecd455f83f22c003267dbabef1b98c06c702784748b8f4c6430438a1c"} Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.220731 4925 generic.go:334] "Generic (PLEG): container finished" podID="46c2cde5-148b-44c2-821a-a470122f1167" containerID="6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d" exitCode=0 Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.220870 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn6rl" event={"ID":"46c2cde5-148b-44c2-821a-a470122f1167","Type":"ContainerDied","Data":"6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d"} Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.261509 4925 generic.go:334] "Generic (PLEG): container finished" 
podID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" containerID="a97cd00301c5057dac8cf80ba330a25e589377010fe8efe602b76addaa7808fb" exitCode=0 Feb 02 11:00:55 crc kubenswrapper[4925]: I0202 11:00:55.261776 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26nhd" event={"ID":"560d449d-bbfb-4f5b-a14f-4a26175a20d2","Type":"ContainerDied","Data":"a97cd00301c5057dac8cf80ba330a25e589377010fe8efe602b76addaa7808fb"} Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.269172 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26nhd" event={"ID":"560d449d-bbfb-4f5b-a14f-4a26175a20d2","Type":"ContainerStarted","Data":"aae3985f1f8a03c25624048757109db5045ec7191fcb8c629b62768b9f5f4350"} Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.271956 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gmldm" event={"ID":"b01fd158-f4e2-4ec8-953b-12dae9c49dd7","Type":"ContainerStarted","Data":"cd4ce0781ba4db9241633b3a8fba846243cb389de8339e0a2bf348e9b9a12a51"} Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.274264 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp28w" event={"ID":"c1044ab1-2d86-4f71-995a-5994d6b2262e","Type":"ContainerStarted","Data":"840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43"} Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.275767 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5f7d" event={"ID":"c985f150-ec7d-4175-99a1-8fb775b7d7d9","Type":"ContainerStarted","Data":"377df476e9567222ae4dcfcf4311f03b6963952208edb4c6b264d25f405679eb"} Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.277275 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn6rl" 
event={"ID":"46c2cde5-148b-44c2-821a-a470122f1167","Type":"ContainerStarted","Data":"79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183"} Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.291005 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-26nhd" podStartSLOduration=3.130298777 podStartE2EDuration="1m13.290985924s" podCreationTimestamp="2026-02-02 10:59:43 +0000 UTC" firstStartedPulling="2026-02-02 10:59:45.629771301 +0000 UTC m=+162.634020263" lastFinishedPulling="2026-02-02 11:00:55.790458448 +0000 UTC m=+232.794707410" observedRunningTime="2026-02-02 11:00:56.290755248 +0000 UTC m=+233.295004220" watchObservedRunningTime="2026-02-02 11:00:56.290985924 +0000 UTC m=+233.295234886" Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.324939 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tp28w" podStartSLOduration=3.266724927 podStartE2EDuration="1m12.324888365s" podCreationTimestamp="2026-02-02 10:59:44 +0000 UTC" firstStartedPulling="2026-02-02 10:59:46.639868735 +0000 UTC m=+163.644117697" lastFinishedPulling="2026-02-02 11:00:55.698032173 +0000 UTC m=+232.702281135" observedRunningTime="2026-02-02 11:00:56.308420678 +0000 UTC m=+233.312669640" watchObservedRunningTime="2026-02-02 11:00:56.324888365 +0000 UTC m=+233.329137327" Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.341742 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gmldm" podStartSLOduration=3.33726206 podStartE2EDuration="1m12.341726362s" podCreationTimestamp="2026-02-02 10:59:44 +0000 UTC" firstStartedPulling="2026-02-02 10:59:46.637913753 +0000 UTC m=+163.642162715" lastFinishedPulling="2026-02-02 11:00:55.642378055 +0000 UTC m=+232.646627017" observedRunningTime="2026-02-02 11:00:56.328748897 +0000 UTC m=+233.332997879" watchObservedRunningTime="2026-02-02 
11:00:56.341726362 +0000 UTC m=+233.345975324" Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.361263 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d5f7d" podStartSLOduration=3.07245362 podStartE2EDuration="1m13.361244471s" podCreationTimestamp="2026-02-02 10:59:43 +0000 UTC" firstStartedPulling="2026-02-02 10:59:45.636807777 +0000 UTC m=+162.641056739" lastFinishedPulling="2026-02-02 11:00:55.925598628 +0000 UTC m=+232.929847590" observedRunningTime="2026-02-02 11:00:56.342897683 +0000 UTC m=+233.347146645" watchObservedRunningTime="2026-02-02 11:00:56.361244471 +0000 UTC m=+233.365493433" Feb 02 11:00:56 crc kubenswrapper[4925]: I0202 11:00:56.362141 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pn6rl" podStartSLOduration=4.194979329 podStartE2EDuration="1m15.362136594s" podCreationTimestamp="2026-02-02 10:59:41 +0000 UTC" firstStartedPulling="2026-02-02 10:59:44.617869889 +0000 UTC m=+161.622118841" lastFinishedPulling="2026-02-02 11:00:55.785027144 +0000 UTC m=+232.789276106" observedRunningTime="2026-02-02 11:00:56.359375091 +0000 UTC m=+233.363624053" watchObservedRunningTime="2026-02-02 11:00:56.362136594 +0000 UTC m=+233.366385556" Feb 02 11:01:02 crc kubenswrapper[4925]: I0202 11:01:02.159229 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pn6rl" Feb 02 11:01:02 crc kubenswrapper[4925]: I0202 11:01:02.159795 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pn6rl" Feb 02 11:01:02 crc kubenswrapper[4925]: I0202 11:01:02.196259 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pn6rl" Feb 02 11:01:02 crc kubenswrapper[4925]: I0202 11:01:02.340958 4925 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qncpv" Feb 02 11:01:02 crc kubenswrapper[4925]: I0202 11:01:02.340995 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qncpv" Feb 02 11:01:02 crc kubenswrapper[4925]: I0202 11:01:02.357137 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pn6rl" Feb 02 11:01:02 crc kubenswrapper[4925]: I0202 11:01:02.388963 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qncpv" Feb 02 11:01:02 crc kubenswrapper[4925]: I0202 11:01:02.652332 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7gsrw"] Feb 02 11:01:03 crc kubenswrapper[4925]: I0202 11:01:03.359095 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qncpv" Feb 02 11:01:03 crc kubenswrapper[4925]: I0202 11:01:03.927312 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 11:01:03 crc kubenswrapper[4925]: I0202 11:01:03.927401 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 11:01:03 crc kubenswrapper[4925]: I0202 11:01:03.976126 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 11:01:04 crc kubenswrapper[4925]: I0202 11:01:04.318535 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 11:01:04 crc kubenswrapper[4925]: I0202 11:01:04.319337 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 11:01:04 crc 
kubenswrapper[4925]: I0202 11:01:04.357195 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 11:01:04 crc kubenswrapper[4925]: I0202 11:01:04.358023 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 11:01:04 crc kubenswrapper[4925]: I0202 11:01:04.481618 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pn6rl"] Feb 02 11:01:04 crc kubenswrapper[4925]: I0202 11:01:04.481809 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pn6rl" podUID="46c2cde5-148b-44c2-821a-a470122f1167" containerName="registry-server" containerID="cri-o://79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183" gracePeriod=2 Feb 02 11:01:04 crc kubenswrapper[4925]: I0202 11:01:04.706411 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qncpv"] Feb 02 11:01:04 crc kubenswrapper[4925]: I0202 11:01:04.956067 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 11:01:04 crc kubenswrapper[4925]: I0202 11:01:04.956778 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.012481 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.078933 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pn6rl" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.211139 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6tcs\" (UniqueName: \"kubernetes.io/projected/46c2cde5-148b-44c2-821a-a470122f1167-kube-api-access-d6tcs\") pod \"46c2cde5-148b-44c2-821a-a470122f1167\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.211324 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-catalog-content\") pod \"46c2cde5-148b-44c2-821a-a470122f1167\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.211458 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-utilities\") pod \"46c2cde5-148b-44c2-821a-a470122f1167\" (UID: \"46c2cde5-148b-44c2-821a-a470122f1167\") " Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.212709 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-utilities" (OuterVolumeSpecName: "utilities") pod "46c2cde5-148b-44c2-821a-a470122f1167" (UID: "46c2cde5-148b-44c2-821a-a470122f1167"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.217497 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c2cde5-148b-44c2-821a-a470122f1167-kube-api-access-d6tcs" (OuterVolumeSpecName: "kube-api-access-d6tcs") pod "46c2cde5-148b-44c2-821a-a470122f1167" (UID: "46c2cde5-148b-44c2-821a-a470122f1167"). InnerVolumeSpecName "kube-api-access-d6tcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.281988 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46c2cde5-148b-44c2-821a-a470122f1167" (UID: "46c2cde5-148b-44c2-821a-a470122f1167"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.313414 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.313552 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6tcs\" (UniqueName: \"kubernetes.io/projected/46c2cde5-148b-44c2-821a-a470122f1167-kube-api-access-d6tcs\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.313587 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c2cde5-148b-44c2-821a-a470122f1167-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.332178 4925 generic.go:334] "Generic (PLEG): container finished" podID="46c2cde5-148b-44c2-821a-a470122f1167" containerID="79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183" exitCode=0 Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.332260 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pn6rl" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.332295 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn6rl" event={"ID":"46c2cde5-148b-44c2-821a-a470122f1167","Type":"ContainerDied","Data":"79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183"} Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.332342 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn6rl" event={"ID":"46c2cde5-148b-44c2-821a-a470122f1167","Type":"ContainerDied","Data":"1619d56e1c64057d9623b12a4ff681292d93abcae4ed6c7a528d8f31df72f9aa"} Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.332378 4925 scope.go:117] "RemoveContainer" containerID="79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.333994 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qncpv" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerName="registry-server" containerID="cri-o://d5c84b55b6d88fcb03e9f67616e5b2ec110c00b0ad4174395bd831732ea7c920" gracePeriod=2 Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.348928 4925 scope.go:117] "RemoveContainer" containerID="6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.374899 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pn6rl"] Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.383776 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.389916 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pn6rl"] Feb 02 11:01:05 
crc kubenswrapper[4925]: I0202 11:01:05.392732 4925 scope.go:117] "RemoveContainer" containerID="be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.399387 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.424460 4925 scope.go:117] "RemoveContainer" containerID="79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183" Feb 02 11:01:05 crc kubenswrapper[4925]: E0202 11:01:05.429414 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183\": container with ID starting with 79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183 not found: ID does not exist" containerID="79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.429470 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183"} err="failed to get container status \"79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183\": rpc error: code = NotFound desc = could not find container \"79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183\": container with ID starting with 79f6453d4e8beea21eb88d3da425f486aff25df011f39ae2e8fabeff26898183 not found: ID does not exist" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.429507 4925 scope.go:117] "RemoveContainer" containerID="6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d" Feb 02 11:01:05 crc kubenswrapper[4925]: E0202 11:01:05.429801 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d\": container with ID starting with 6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d not found: ID does not exist" containerID="6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.429831 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d"} err="failed to get container status \"6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d\": rpc error: code = NotFound desc = could not find container \"6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d\": container with ID starting with 6d67ba0cf863a597ab7afaa107149b527316b9f4982bbba87266a276e4d4b84d not found: ID does not exist" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.429850 4925 scope.go:117] "RemoveContainer" containerID="be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395" Feb 02 11:01:05 crc kubenswrapper[4925]: E0202 11:01:05.430185 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395\": container with ID starting with be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395 not found: ID does not exist" containerID="be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.430232 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395"} err="failed to get container status \"be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395\": rpc error: code = NotFound desc = could not find container \"be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395\": container with ID 
starting with be5f384664ea0d94d1e1b10b0911c8814c68724ef0d7d7d75d393120f4a34395 not found: ID does not exist" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.618914 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.618984 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 11:01:05 crc kubenswrapper[4925]: I0202 11:01:05.655382 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.343145 4925 generic.go:334] "Generic (PLEG): container finished" podID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerID="d5c84b55b6d88fcb03e9f67616e5b2ec110c00b0ad4174395bd831732ea7c920" exitCode=0 Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.343186 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qncpv" event={"ID":"bf7432d2-d4a9-4fa9-8570-e76d21c8e771","Type":"ContainerDied","Data":"d5c84b55b6d88fcb03e9f67616e5b2ec110c00b0ad4174395bd831732ea7c920"} Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.380537 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.390931 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qncpv" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.556529 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56pv6\" (UniqueName: \"kubernetes.io/projected/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-kube-api-access-56pv6\") pod \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.556741 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-catalog-content\") pod \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.556795 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-utilities\") pod \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\" (UID: \"bf7432d2-d4a9-4fa9-8570-e76d21c8e771\") " Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.557541 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-utilities" (OuterVolumeSpecName: "utilities") pod "bf7432d2-d4a9-4fa9-8570-e76d21c8e771" (UID: "bf7432d2-d4a9-4fa9-8570-e76d21c8e771"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.562846 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-kube-api-access-56pv6" (OuterVolumeSpecName: "kube-api-access-56pv6") pod "bf7432d2-d4a9-4fa9-8570-e76d21c8e771" (UID: "bf7432d2-d4a9-4fa9-8570-e76d21c8e771"). InnerVolumeSpecName "kube-api-access-56pv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.607961 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf7432d2-d4a9-4fa9-8570-e76d21c8e771" (UID: "bf7432d2-d4a9-4fa9-8570-e76d21c8e771"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.658917 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.658955 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.659031 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56pv6\" (UniqueName: \"kubernetes.io/projected/bf7432d2-d4a9-4fa9-8570-e76d21c8e771-kube-api-access-56pv6\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.679628 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c2cde5-148b-44c2-821a-a470122f1167" path="/var/lib/kubelet/pods/46c2cde5-148b-44c2-821a-a470122f1167/volumes" Feb 02 11:01:06 crc kubenswrapper[4925]: I0202 11:01:06.882790 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26nhd"] Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.354663 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-26nhd" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" 
containerName="registry-server" containerID="cri-o://aae3985f1f8a03c25624048757109db5045ec7191fcb8c629b62768b9f5f4350" gracePeriod=2 Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.355105 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qncpv" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.355753 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qncpv" event={"ID":"bf7432d2-d4a9-4fa9-8570-e76d21c8e771","Type":"ContainerDied","Data":"cccc4058476dd1dc21596f9f6014bbf7f6fd4e2b501f6866f77252f4351ffea0"} Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.355795 4925 scope.go:117] "RemoveContainer" containerID="d5c84b55b6d88fcb03e9f67616e5b2ec110c00b0ad4174395bd831732ea7c920" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.377236 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qncpv"] Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.380026 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qncpv"] Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.426653 4925 scope.go:117] "RemoveContainer" containerID="eec89e083e1e92a05e30abb3347d1e3ac54c725682f7de1c1b8940bd75f14e11" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.446224 4925 scope.go:117] "RemoveContainer" containerID="46e1632faa8a0f835647fa9d3cb39091d0e0c50a6e890197868aa5865956ce8d" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.556871 4925 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.557186 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9577233-8642-44a5-98f5-0538ee3f57cd" containerName="pruner" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557210 4925 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b9577233-8642-44a5-98f5-0538ee3f57cd" containerName="pruner" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.557222 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c2cde5-148b-44c2-821a-a470122f1167" containerName="registry-server" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557230 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c2cde5-148b-44c2-821a-a470122f1167" containerName="registry-server" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.557242 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerName="registry-server" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557249 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerName="registry-server" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.557264 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c2cde5-148b-44c2-821a-a470122f1167" containerName="extract-content" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557270 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c2cde5-148b-44c2-821a-a470122f1167" containerName="extract-content" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.557279 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerName="extract-content" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557285 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerName="extract-content" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.557293 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c2cde5-148b-44c2-821a-a470122f1167" containerName="extract-utilities" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557300 4925 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="46c2cde5-148b-44c2-821a-a470122f1167" containerName="extract-utilities" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.557311 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e623d6f6-1bf2-43f4-a280-147617dbf9ef" containerName="collect-profiles" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557317 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="e623d6f6-1bf2-43f4-a280-147617dbf9ef" containerName="collect-profiles" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.557326 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerName="extract-utilities" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557333 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerName="extract-utilities" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557422 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c2cde5-148b-44c2-821a-a470122f1167" containerName="registry-server" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557436 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9577233-8642-44a5-98f5-0538ee3f57cd" containerName="pruner" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557444 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" containerName="registry-server" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557456 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="e623d6f6-1bf2-43f4-a280-147617dbf9ef" containerName="collect-profiles" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.557815 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.589705 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.640989 4925 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.641543 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924" gracePeriod=15 Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.641609 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0" gracePeriod=15 Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.641687 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921" gracePeriod=15 Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.641714 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e" gracePeriod=15 Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 
11:01:07.641683 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b" gracePeriod=15 Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.642039 4925 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.643055 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.643087 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.643108 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.643114 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.643146 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.643153 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.643164 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.643172 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.643181 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644433 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.644454 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644462 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.644488 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644495 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.644501 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644508 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644610 4925 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644620 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644628 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644637 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644643 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.644649 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.645014 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.670092 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.670169 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.670198 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.670240 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.670260 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.770865 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.770928 4925 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771049 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771103 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771160 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771260 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771290 4925 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771312 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771370 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771464 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771506 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771538 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.771566 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.872698 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.872747 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.872827 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.872836 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.872889 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.872965 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: I0202 11:01:07.884569 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 11:01:07 crc kubenswrapper[4925]: W0202 11:01:07.903538 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-09d1ce25b8048a78e5eb15c6486ae31870f8b2dc74f5e967cf85c50daf02c721 WatchSource:0}: Error finding container 09d1ce25b8048a78e5eb15c6486ae31870f8b2dc74f5e967cf85c50daf02c721: Status 404 returned error can't find the container with id 09d1ce25b8048a78e5eb15c6486ae31870f8b2dc74f5e967cf85c50daf02c721 Feb 02 11:01:07 crc kubenswrapper[4925]: E0202 11:01:07.907161 4925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189068f7ff4dbb6e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 11:01:07.906100078 +0000 UTC m=+244.910349040,LastTimestamp:2026-02-02 11:01:07.906100078 +0000 UTC m=+244.910349040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.368229 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.369770 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.370466 4925 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0" exitCode=0 Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.370494 4925 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e" exitCode=0 Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.370509 4925 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b" 
exitCode=0 Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.370518 4925 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921" exitCode=2 Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.370559 4925 scope.go:117] "RemoveContainer" containerID="3cdf7178365869a3abd5ec85f13b53e899b95e923c91106e26ad9f092baeb324" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.371934 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"09d1ce25b8048a78e5eb15c6486ae31870f8b2dc74f5e967cf85c50daf02c721"} Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.374495 4925 generic.go:334] "Generic (PLEG): container finished" podID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" containerID="aae3985f1f8a03c25624048757109db5045ec7191fcb8c629b62768b9f5f4350" exitCode=0 Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.374559 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26nhd" event={"ID":"560d449d-bbfb-4f5b-a14f-4a26175a20d2","Type":"ContainerDied","Data":"aae3985f1f8a03c25624048757109db5045ec7191fcb8c629b62768b9f5f4350"} Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.376232 4925 generic.go:334] "Generic (PLEG): container finished" podID="b1472df2-041e-456c-b47a-fd15943af977" containerID="3b83a0b3240b0f742a4ba6f62b04b004da7d3d102bf70bd09194b5d92a39e6fa" exitCode=0 Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.376332 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1472df2-041e-456c-b47a-fd15943af977","Type":"ContainerDied","Data":"3b83a0b3240b0f742a4ba6f62b04b004da7d3d102bf70bd09194b5d92a39e6fa"} Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.376882 4925 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.377056 4925 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.377368 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.671012 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7432d2-d4a9-4fa9-8570-e76d21c8e771" path="/var/lib/kubelet/pods/bf7432d2-d4a9-4fa9-8570-e76d21c8e771/volumes" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.680227 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.681356 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.681635 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.682015 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.783583 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-catalog-content\") pod \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.783767 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-utilities\") pod \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " Feb 02 11:01:08 crc 
kubenswrapper[4925]: I0202 11:01:08.783817 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr9gv\" (UniqueName: \"kubernetes.io/projected/560d449d-bbfb-4f5b-a14f-4a26175a20d2-kube-api-access-fr9gv\") pod \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\" (UID: \"560d449d-bbfb-4f5b-a14f-4a26175a20d2\") " Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.784950 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-utilities" (OuterVolumeSpecName: "utilities") pod "560d449d-bbfb-4f5b-a14f-4a26175a20d2" (UID: "560d449d-bbfb-4f5b-a14f-4a26175a20d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.791006 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560d449d-bbfb-4f5b-a14f-4a26175a20d2-kube-api-access-fr9gv" (OuterVolumeSpecName: "kube-api-access-fr9gv") pod "560d449d-bbfb-4f5b-a14f-4a26175a20d2" (UID: "560d449d-bbfb-4f5b-a14f-4a26175a20d2"). InnerVolumeSpecName "kube-api-access-fr9gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.803188 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "560d449d-bbfb-4f5b-a14f-4a26175a20d2" (UID: "560d449d-bbfb-4f5b-a14f-4a26175a20d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.884994 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.885022 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/560d449d-bbfb-4f5b-a14f-4a26175a20d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:08 crc kubenswrapper[4925]: I0202 11:01:08.885034 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr9gv\" (UniqueName: \"kubernetes.io/projected/560d449d-bbfb-4f5b-a14f-4a26175a20d2-kube-api-access-fr9gv\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.385179 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446"} Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.386129 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.386422 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc 
kubenswrapper[4925]: I0202 11:01:09.386795 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.387236 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26nhd" event={"ID":"560d449d-bbfb-4f5b-a14f-4a26175a20d2","Type":"ContainerDied","Data":"059eee2aede03a3a21b0bee97a6cd8627095b4698c08b1c31476783cef7ebe55"} Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.387268 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26nhd" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.387281 4925 scope.go:117] "RemoveContainer" containerID="aae3985f1f8a03c25624048757109db5045ec7191fcb8c629b62768b9f5f4350" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.388508 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.388721 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.388891 4925 status_manager.go:851] "Failed to 
get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.390024 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.399499 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.400327 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.400693 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.408559 4925 scope.go:117] "RemoveContainer" containerID="a97cd00301c5057dac8cf80ba330a25e589377010fe8efe602b76addaa7808fb" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.442234 4925 scope.go:117] 
"RemoveContainer" containerID="4a7b1b5c703be17e1c14eeaa3d844f6bc932cb3ab75539b12ec1def1b34ca179" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.627953 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.628489 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.628851 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.629154 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.693473 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-kubelet-dir\") pod \"b1472df2-041e-456c-b47a-fd15943af977\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.693540 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-var-lock\") pod \"b1472df2-041e-456c-b47a-fd15943af977\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.693580 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1472df2-041e-456c-b47a-fd15943af977" (UID: "b1472df2-041e-456c-b47a-fd15943af977"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.693666 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-var-lock" (OuterVolumeSpecName: "var-lock") pod "b1472df2-041e-456c-b47a-fd15943af977" (UID: "b1472df2-041e-456c-b47a-fd15943af977"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.693679 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1472df2-041e-456c-b47a-fd15943af977-kube-api-access\") pod \"b1472df2-041e-456c-b47a-fd15943af977\" (UID: \"b1472df2-041e-456c-b47a-fd15943af977\") " Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.694112 4925 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.694145 4925 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1472df2-041e-456c-b47a-fd15943af977-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.698245 4925 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1472df2-041e-456c-b47a-fd15943af977-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1472df2-041e-456c-b47a-fd15943af977" (UID: "b1472df2-041e-456c-b47a-fd15943af977"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.795764 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1472df2-041e-456c-b47a-fd15943af977-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.991908 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.992785 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.994132 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.994616 4925 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.994953 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:09 crc kubenswrapper[4925]: I0202 11:01:09.995323 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.098465 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 
11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.098542 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.098616 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.098636 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.098701 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.098727 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.099133 4925 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.099150 4925 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.099158 4925 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.399985 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.400003 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1472df2-041e-456c-b47a-fd15943af977","Type":"ContainerDied","Data":"8a146995c608771935e8ff137367db3a0b181768c3c999f23f1cef8e34dc0933"} Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.400068 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a146995c608771935e8ff137367db3a0b181768c3c999f23f1cef8e34dc0933" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.404575 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.405593 4925 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924" exitCode=0 Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.405795 4925 scope.go:117] "RemoveContainer" containerID="81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.405852 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.426702 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.427275 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.427758 4925 scope.go:117] "RemoveContainer" containerID="3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.427756 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.428217 4925 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.428763 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.429184 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.429409 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.429748 4925 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.440715 4925 scope.go:117] "RemoveContainer" 
containerID="ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.453772 4925 scope.go:117] "RemoveContainer" containerID="4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.475304 4925 scope.go:117] "RemoveContainer" containerID="a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.494135 4925 scope.go:117] "RemoveContainer" containerID="f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.507643 4925 scope.go:117] "RemoveContainer" containerID="81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0" Feb 02 11:01:10 crc kubenswrapper[4925]: E0202 11:01:10.507964 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\": container with ID starting with 81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0 not found: ID does not exist" containerID="81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.508000 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0"} err="failed to get container status \"81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\": rpc error: code = NotFound desc = could not find container \"81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0\": container with ID starting with 81df33e0d3cef3dd7c4419096314ddcba404566a501d49eda728ce58f11d3cc0 not found: ID does not exist" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.508027 4925 scope.go:117] "RemoveContainer" 
containerID="3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e" Feb 02 11:01:10 crc kubenswrapper[4925]: E0202 11:01:10.508805 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\": container with ID starting with 3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e not found: ID does not exist" containerID="3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.508836 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e"} err="failed to get container status \"3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\": rpc error: code = NotFound desc = could not find container \"3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e\": container with ID starting with 3da1d2dc80b743e4b834310411e4d0a9eb26f7a3adb2de038cb2cdd962ba201e not found: ID does not exist" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.508855 4925 scope.go:117] "RemoveContainer" containerID="ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b" Feb 02 11:01:10 crc kubenswrapper[4925]: E0202 11:01:10.509358 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\": container with ID starting with ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b not found: ID does not exist" containerID="ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.509426 4925 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b"} err="failed to get container status \"ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\": rpc error: code = NotFound desc = could not find container \"ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b\": container with ID starting with ef1168cc32df1023edf7765b85cd4b35f2bc92f0b83c0d98c477cd34f4f2ed4b not found: ID does not exist" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.509444 4925 scope.go:117] "RemoveContainer" containerID="4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921" Feb 02 11:01:10 crc kubenswrapper[4925]: E0202 11:01:10.509760 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\": container with ID starting with 4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921 not found: ID does not exist" containerID="4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.509865 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921"} err="failed to get container status \"4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\": rpc error: code = NotFound desc = could not find container \"4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921\": container with ID starting with 4ef23777ea1a70bdcd655065f0a2bdde5fb5f68ede262b1ca8830789abebe921 not found: ID does not exist" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.509980 4925 scope.go:117] "RemoveContainer" containerID="a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924" Feb 02 11:01:10 crc kubenswrapper[4925]: E0202 11:01:10.510592 4925 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\": container with ID starting with a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924 not found: ID does not exist" containerID="a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.510620 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924"} err="failed to get container status \"a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\": rpc error: code = NotFound desc = could not find container \"a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924\": container with ID starting with a20f2ce44bb9a0f03fc4495771fcdc4d336079280d115a3a775cfc5c25ccb924 not found: ID does not exist" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.510636 4925 scope.go:117] "RemoveContainer" containerID="f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945" Feb 02 11:01:10 crc kubenswrapper[4925]: E0202 11:01:10.510877 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\": container with ID starting with f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945 not found: ID does not exist" containerID="f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.510980 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945"} err="failed to get container status \"f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\": rpc error: code = NotFound desc = could not find container 
\"f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945\": container with ID starting with f64c8e7d702f7d1dbb5011819b6832e51b66aaf2f7d4213c67676f56fe786945 not found: ID does not exist" Feb 02 11:01:10 crc kubenswrapper[4925]: I0202 11:01:10.675473 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 11:01:12 crc kubenswrapper[4925]: E0202 11:01:12.514137 4925 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:12 crc kubenswrapper[4925]: E0202 11:01:12.514761 4925 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:12 crc kubenswrapper[4925]: E0202 11:01:12.515407 4925 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:12 crc kubenswrapper[4925]: E0202 11:01:12.515831 4925 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:12 crc kubenswrapper[4925]: E0202 11:01:12.516324 4925 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:12 crc kubenswrapper[4925]: I0202 11:01:12.516383 4925 
controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 11:01:12 crc kubenswrapper[4925]: E0202 11:01:12.516760 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="200ms" Feb 02 11:01:12 crc kubenswrapper[4925]: E0202 11:01:12.717826 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="400ms" Feb 02 11:01:13 crc kubenswrapper[4925]: E0202 11:01:13.118485 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="800ms" Feb 02 11:01:13 crc kubenswrapper[4925]: E0202 11:01:13.885524 4925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189068f7ff4dbb6e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 11:01:07.906100078 +0000 UTC m=+244.910349040,LastTimestamp:2026-02-02 11:01:07.906100078 +0000 UTC m=+244.910349040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 11:01:13 crc kubenswrapper[4925]: E0202 11:01:13.919037 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="1.6s" Feb 02 11:01:14 crc kubenswrapper[4925]: I0202 11:01:14.666213 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:14 crc kubenswrapper[4925]: I0202 11:01:14.666650 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:14 crc kubenswrapper[4925]: I0202 11:01:14.667054 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:15 crc kubenswrapper[4925]: E0202 11:01:15.520466 4925 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="3.2s" Feb 02 11:01:18 crc kubenswrapper[4925]: E0202 11:01:18.721847 4925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="6.4s" Feb 02 11:01:20 crc kubenswrapper[4925]: I0202 11:01:20.663602 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:20 crc kubenswrapper[4925]: I0202 11:01:20.666209 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:20 crc kubenswrapper[4925]: I0202 11:01:20.666767 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:20 crc kubenswrapper[4925]: I0202 11:01:20.668561 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:20 crc kubenswrapper[4925]: I0202 11:01:20.694259 4925 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:20 crc kubenswrapper[4925]: I0202 11:01:20.694310 4925 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:20 crc kubenswrapper[4925]: E0202 11:01:20.694736 4925 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:20 crc kubenswrapper[4925]: I0202 11:01:20.695265 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.501185 4925 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1de17fb57d337ebe5f2341f9d8a8ae716cfdc0b452f0c7f8891047dc85efb6e9" exitCode=0 Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.501318 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1de17fb57d337ebe5f2341f9d8a8ae716cfdc0b452f0c7f8891047dc85efb6e9"} Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.501511 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c36e2fff732d54dfae3e00af0d1523999a7853f243f21eac8acf731457e2b8f1"} Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.501795 4925 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:21 crc 
kubenswrapper[4925]: I0202 11:01:21.501811 4925 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:21 crc kubenswrapper[4925]: E0202 11:01:21.502311 4925 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.503634 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.504982 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.505547 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.505580 4925 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7" exitCode=1 Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.505600 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7"} Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.505844 4925 scope.go:117] "RemoveContainer" containerID="2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7" Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.505824 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.507055 4925 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.507305 4925 status_manager.go:851] "Failed to get status for pod" podUID="b1472df2-041e-456c-b47a-fd15943af977" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.508482 4925 status_manager.go:851] "Failed to get status for pod" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" pod="openshift-marketplace/redhat-marketplace-26nhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26nhd\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 
11:01:21 crc kubenswrapper[4925]: I0202 11:01:21.508858 4925 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Feb 02 11:01:22 crc kubenswrapper[4925]: I0202 11:01:22.517625 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 11:01:22 crc kubenswrapper[4925]: I0202 11:01:22.522355 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 11:01:22 crc kubenswrapper[4925]: I0202 11:01:22.522818 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e8c524473f10523b57b6bbf18a21b7b2b38bb9655e879a0b1cbb53d21a19474b"} Feb 02 11:01:22 crc kubenswrapper[4925]: I0202 11:01:22.537957 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"72ca94f1c5bdaeee3526ef60890ece952868f43b1e6a46ea3f04b98af7942865"} Feb 02 11:01:22 crc kubenswrapper[4925]: I0202 11:01:22.538015 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c47a72e94eb9b7982f1b6cd84b7014ab20e82cad39f2edbf70838d9a15982a53"} Feb 02 11:01:22 crc kubenswrapper[4925]: I0202 11:01:22.538034 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5345f8c6d56357811fe06925af237bac628d9eaf45841d13fa1304b33e979b59"} Feb 02 11:01:23 crc kubenswrapper[4925]: I0202 11:01:23.549244 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fabb774a43c765d03e91fc7edd6d99c216179283dd26e616e62e71cc1fc25adb"} Feb 02 11:01:23 crc kubenswrapper[4925]: I0202 11:01:23.550694 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ff121b5f5636b69c535790d6dfd171295c0090431056f184621be4afde9476a5"} Feb 02 11:01:23 crc kubenswrapper[4925]: I0202 11:01:23.550884 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:23 crc kubenswrapper[4925]: I0202 11:01:23.549758 4925 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:23 crc kubenswrapper[4925]: I0202 11:01:23.551233 4925 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:24 crc kubenswrapper[4925]: I0202 11:01:24.780061 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 11:01:25 crc kubenswrapper[4925]: I0202 11:01:25.696325 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:25 crc kubenswrapper[4925]: I0202 11:01:25.696412 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" 
Feb 02 11:01:25 crc kubenswrapper[4925]: I0202 11:01:25.703743 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:25 crc kubenswrapper[4925]: I0202 11:01:25.808440 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 11:01:25 crc kubenswrapper[4925]: I0202 11:01:25.808884 4925 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 11:01:25 crc kubenswrapper[4925]: I0202 11:01:25.809496 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 11:01:27 crc kubenswrapper[4925]: I0202 11:01:27.690980 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" podUID="686d6cf9-761e-4394-ab8c-316841705a26" containerName="oauth-openshift" containerID="cri-o://891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778" gracePeriod=15 Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.111547 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181470 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-router-certs\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181572 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9fch\" (UniqueName: \"kubernetes.io/projected/686d6cf9-761e-4394-ab8c-316841705a26-kube-api-access-c9fch\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181596 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-ocp-branding-template\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181624 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-service-ca\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181652 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-login\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: 
\"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181684 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-audit-policies\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181730 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-idp-0-file-data\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181770 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-trusted-ca-bundle\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181803 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-serving-cert\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181842 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-provider-selection\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc 
kubenswrapper[4925]: I0202 11:01:28.181897 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-cliconfig\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181942 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/686d6cf9-761e-4394-ab8c-316841705a26-audit-dir\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.181970 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-error\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.182001 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-session\") pod \"686d6cf9-761e-4394-ab8c-316841705a26\" (UID: \"686d6cf9-761e-4394-ab8c-316841705a26\") " Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.183127 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.183192 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686d6cf9-761e-4394-ab8c-316841705a26-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.183256 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.184467 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.184903 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.188449 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.188843 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.189152 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.189383 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.189497 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.189937 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.189964 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.190192 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686d6cf9-761e-4394-ab8c-316841705a26-kube-api-access-c9fch" (OuterVolumeSpecName: "kube-api-access-c9fch") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "kube-api-access-c9fch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.190352 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "686d6cf9-761e-4394-ab8c-316841705a26" (UID: "686d6cf9-761e-4394-ab8c-316841705a26"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284602 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9fch\" (UniqueName: \"kubernetes.io/projected/686d6cf9-761e-4394-ab8c-316841705a26-kube-api-access-c9fch\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284639 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284654 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284668 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284681 4925 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-audit-policies\") on node \"crc\" 
DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284692 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284706 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284717 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284729 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284742 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284765 4925 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/686d6cf9-761e-4394-ab8c-316841705a26-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284779 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284793 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.284804 4925 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/686d6cf9-761e-4394-ab8c-316841705a26-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.560890 4925 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.584872 4925 generic.go:334] "Generic (PLEG): container finished" podID="686d6cf9-761e-4394-ab8c-316841705a26" containerID="891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778" exitCode=0 Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.584964 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.584962 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" event={"ID":"686d6cf9-761e-4394-ab8c-316841705a26","Type":"ContainerDied","Data":"891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778"} Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.585105 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7gsrw" event={"ID":"686d6cf9-761e-4394-ab8c-316841705a26","Type":"ContainerDied","Data":"3dad6c612e79b4f43d13ca6bbc05b5af3d0be24315a5a57bcd98535006011687"} Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.585133 4925 scope.go:117] "RemoveContainer" containerID="891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.585745 4925 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.585761 4925 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.596225 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.598576 4925 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e3eff313-500c-4d45-84dd-7ee11dcbfd98" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.607359 4925 scope.go:117] "RemoveContainer" containerID="891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778" 
Feb 02 11:01:28 crc kubenswrapper[4925]: E0202 11:01:28.607804 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778\": container with ID starting with 891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778 not found: ID does not exist" containerID="891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778" Feb 02 11:01:28 crc kubenswrapper[4925]: I0202 11:01:28.607842 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778"} err="failed to get container status \"891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778\": rpc error: code = NotFound desc = could not find container \"891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778\": container with ID starting with 891d6ea319b9c9548c0c24ee453b664becb4b4835d28b05942a9f4edaafd1778 not found: ID does not exist" Feb 02 11:01:29 crc kubenswrapper[4925]: I0202 11:01:29.591790 4925 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:29 crc kubenswrapper[4925]: I0202 11:01:29.591825 4925 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="929e2376-c9ca-4fd7-95cc-53d1e78a7480" Feb 02 11:01:34 crc kubenswrapper[4925]: I0202 11:01:34.689624 4925 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e3eff313-500c-4d45-84dd-7ee11dcbfd98" Feb 02 11:01:35 crc kubenswrapper[4925]: I0202 11:01:35.808186 4925 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 11:01:35 crc kubenswrapper[4925]: I0202 11:01:35.808554 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 11:01:37 crc kubenswrapper[4925]: I0202 11:01:37.873680 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 11:01:38 crc kubenswrapper[4925]: I0202 11:01:38.891382 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 11:01:38 crc kubenswrapper[4925]: I0202 11:01:38.921557 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 11:01:39 crc kubenswrapper[4925]: I0202 11:01:39.415836 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 11:01:39 crc kubenswrapper[4925]: I0202 11:01:39.639794 4925 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 11:01:40 crc kubenswrapper[4925]: I0202 11:01:40.162678 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 11:01:40 crc kubenswrapper[4925]: I0202 11:01:40.541686 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 11:01:40 crc kubenswrapper[4925]: I0202 11:01:40.636969 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 11:01:40 crc kubenswrapper[4925]: I0202 11:01:40.741690 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 11:01:40 crc kubenswrapper[4925]: I0202 11:01:40.767327 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 11:01:40 crc kubenswrapper[4925]: I0202 11:01:40.815206 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 11:01:40 crc kubenswrapper[4925]: I0202 11:01:40.880921 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.110733 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.478886 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.501663 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.531886 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.540411 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.570589 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.707784 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-secret" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.824942 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.855693 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.915261 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 11:01:41 crc kubenswrapper[4925]: I0202 11:01:41.999326 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.051190 4925 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.051950 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.323583 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.511258 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.567482 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.588035 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.649824 4925 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.665336 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.721340 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.781704 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 11:01:42 crc kubenswrapper[4925]: I0202 11:01:42.868578 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.043745 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.058495 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.239579 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.301412 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.322632 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.336380 4925 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.346437 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.369269 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.444189 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.449546 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.652671 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.673342 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.801847 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.810980 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.844627 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.882853 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.884925 4925 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.954035 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.967333 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.977034 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 11:01:43 crc kubenswrapper[4925]: I0202 11:01:43.977730 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.085563 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.136234 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.167770 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.228258 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.266309 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.300037 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 11:01:44 
crc kubenswrapper[4925]: I0202 11:01:44.408960 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.474987 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.621828 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.634361 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.682266 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.722396 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.740668 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.859190 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 11:01:44 crc kubenswrapper[4925]: I0202 11:01:44.926459 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.043998 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.137719 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.182236 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.194877 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.221187 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.261434 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.388005 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.394628 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.396802 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.410555 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.415584 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.473332 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 11:01:45 crc kubenswrapper[4925]: 
I0202 11:01:45.473880 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.494358 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.605120 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.659468 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.688824 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.769739 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.793285 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.808688 4925 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.808737 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": 
dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.808781 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.809326 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e8c524473f10523b57b6bbf18a21b7b2b38bb9655e879a0b1cbb53d21a19474b"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.809425 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e8c524473f10523b57b6bbf18a21b7b2b38bb9655e879a0b1cbb53d21a19474b" gracePeriod=30 Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.947610 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.954035 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 11:01:45 crc kubenswrapper[4925]: I0202 11:01:45.980186 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.124417 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.158352 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.218576 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.299328 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.385527 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.430832 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.622977 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.670484 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.676700 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.692550 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.767515 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.821594 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" 
Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.833938 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.872133 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.880154 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.921782 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.928241 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 11:01:46 crc kubenswrapper[4925]: I0202 11:01:46.928520 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.133686 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.179045 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.185343 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.225841 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.277794 4925 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.279424 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.332742 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.336742 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.368767 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.422251 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.431427 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.509179 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.546227 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.574832 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.615214 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.617381 4925 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.632154 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.717446 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.762035 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.771193 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.798499 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.799610 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.806083 4925 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.808421 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.808402907 podStartE2EDuration="40.808402907s" podCreationTimestamp="2026-02-02 11:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:28.299251228 +0000 UTC m=+265.303500200" watchObservedRunningTime="2026-02-02 
11:01:47.808402907 +0000 UTC m=+284.812651869" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.811338 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-26nhd","openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-7gsrw"] Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.811403 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.815882 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.842256 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.842238988 podStartE2EDuration="19.842238988s" podCreationTimestamp="2026-02-02 11:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:47.840413159 +0000 UTC m=+284.844662131" watchObservedRunningTime="2026-02-02 11:01:47.842238988 +0000 UTC m=+284.846487960" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.905901 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.929390 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.968617 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 11:01:47 crc kubenswrapper[4925]: I0202 11:01:47.985843 4925 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.066888 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.185181 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.300043 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.373133 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.385262 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.447654 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.456259 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.487594 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.579843 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.582297 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.602818 4925 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.640429 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.671113 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" path="/var/lib/kubelet/pods/560d449d-bbfb-4f5b-a14f-4a26175a20d2/volumes" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.671751 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686d6cf9-761e-4394-ab8c-316841705a26" path="/var/lib/kubelet/pods/686d6cf9-761e-4394-ab8c-316841705a26/volumes" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.741732 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.968464 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 11:01:48 crc kubenswrapper[4925]: I0202 11:01:48.991510 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.002523 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.064845 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.210806 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.264883 4925 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.466271 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.495687 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.541122 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.609650 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.721019 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.728458 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.744989 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.807673 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 11:01:49 crc kubenswrapper[4925]: I0202 11:01:49.932967 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.095171 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 
11:01:50.162594 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.208628 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.212527 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.238560 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.267646 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.388723 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.533764 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.534461 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.549937 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.652792 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.681496 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.733512 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.841539 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.948828 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.956677 4925 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.956935 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446" gracePeriod=5 Feb 02 11:01:50 crc kubenswrapper[4925]: I0202 11:01:50.999052 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.061869 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.079711 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.085065 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.086317 4925 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.159897 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.276622 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.322225 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.400419 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.443988 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.452643 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.561040 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.617732 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.675124 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.779634 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.796116 4925 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.948045 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 11:01:51 crc kubenswrapper[4925]: I0202 11:01:51.948557 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.017937 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.069577 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-96fb489c7-lxm25"] Feb 02 11:01:52 crc kubenswrapper[4925]: E0202 11:01:52.069795 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.069809 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 11:01:52 crc kubenswrapper[4925]: E0202 11:01:52.069822 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686d6cf9-761e-4394-ab8c-316841705a26" containerName="oauth-openshift" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.069831 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="686d6cf9-761e-4394-ab8c-316841705a26" containerName="oauth-openshift" Feb 02 11:01:52 crc kubenswrapper[4925]: E0202 11:01:52.069846 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" containerName="extract-utilities" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.069853 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" 
containerName="extract-utilities" Feb 02 11:01:52 crc kubenswrapper[4925]: E0202 11:01:52.069866 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1472df2-041e-456c-b47a-fd15943af977" containerName="installer" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.069875 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1472df2-041e-456c-b47a-fd15943af977" containerName="installer" Feb 02 11:01:52 crc kubenswrapper[4925]: E0202 11:01:52.069892 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" containerName="extract-content" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.069900 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" containerName="extract-content" Feb 02 11:01:52 crc kubenswrapper[4925]: E0202 11:01:52.069910 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" containerName="registry-server" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.069917 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" containerName="registry-server" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.070026 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1472df2-041e-456c-b47a-fd15943af977" containerName="installer" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.070037 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="560d449d-bbfb-4f5b-a14f-4a26175a20d2" containerName="registry-server" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.070051 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="686d6cf9-761e-4394-ab8c-316841705a26" containerName="oauth-openshift" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.070063 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.070500 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.073412 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.073584 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.073780 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.073802 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.073959 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.074192 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.077126 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.077392 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.077775 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.077781 
4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.077934 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.078110 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.109747 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.111547 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.111678 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.120833 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhktn\" (UniqueName: \"kubernetes.io/projected/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-kube-api-access-lhktn\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.120891 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-router-certs\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 
11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.120929 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.120952 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-template-error\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.120974 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-audit-dir\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.120994 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-service-ca\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.121013 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.121037 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.121108 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.121132 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-template-login\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.121159 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-audit-policies\") pod 
\"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.121184 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-session\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.121210 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.121232 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.143507 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-96fb489c7-lxm25"] Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.159563 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.219506 4925 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222237 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222281 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222333 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhktn\" (UniqueName: \"kubernetes.io/projected/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-kube-api-access-lhktn\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222360 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-router-certs\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222387 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222406 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-template-error\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222426 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-audit-dir\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222444 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-service-ca\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222461 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " 
pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222481 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222514 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222531 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-template-login\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222550 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-audit-policies\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222568 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-session\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.222995 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-audit-dir\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.223183 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.223505 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-service-ca\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.223879 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-audit-policies\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.224155 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.228907 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-router-certs\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.229038 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.229258 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.229568 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.230314 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-session\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.231642 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.236652 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-template-login\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.240276 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhktn\" (UniqueName: \"kubernetes.io/projected/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-kube-api-access-lhktn\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.250384 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/074b7b82-0d10-48bd-bc03-9cdcd1ee56c4-v4-0-config-user-template-error\") pod \"oauth-openshift-96fb489c7-lxm25\" (UID: \"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4\") " pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.367052 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.386351 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.507477 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.517448 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.594462 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-96fb489c7-lxm25"] Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.628541 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.737647 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.751536 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" event={"ID":"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4","Type":"ContainerStarted","Data":"e4d1ad0a092d3bd2a2d2c71020cc89bee950793de5f12081f3b5676bc1b2be94"} Feb 02 11:01:52 crc kubenswrapper[4925]: I0202 11:01:52.880424 4925 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.030571 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.158471 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.329630 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.381657 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.402566 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.520385 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.520756 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.677851 4925 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.680798 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.741440 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.758645 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" event={"ID":"074b7b82-0d10-48bd-bc03-9cdcd1ee56c4","Type":"ContainerStarted","Data":"1585833916f39420a28b23ff8f4326db7547abf409ba4a4805a40c94b24c2cc5"}
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.758971 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.766228 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.784843 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-96fb489c7-lxm25" podStartSLOduration=51.78481061 podStartE2EDuration="51.78481061s" podCreationTimestamp="2026-02-02 11:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:53.780570116 +0000 UTC m=+290.784819138" watchObservedRunningTime="2026-02-02 11:01:53.78481061 +0000 UTC m=+290.789059602"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.869903 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.871211 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.982265 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 11:01:53 crc kubenswrapper[4925]: I0202 11:01:53.984835 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 02 11:01:54 crc kubenswrapper[4925]: I0202 11:01:54.116794 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 11:01:54 crc kubenswrapper[4925]: I0202 11:01:54.321014 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 02 11:01:54 crc kubenswrapper[4925]: I0202 11:01:54.331482 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 02 11:01:54 crc kubenswrapper[4925]: I0202 11:01:54.431881 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 02 11:01:54 crc kubenswrapper[4925]: I0202 11:01:54.465368 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 02 11:01:54 crc kubenswrapper[4925]: I0202 11:01:54.608280 4925 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 11:01:54 crc kubenswrapper[4925]: I0202 11:01:54.783321 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 11:01:54 crc kubenswrapper[4925]: I0202 11:01:54.904402 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 02 11:01:55 crc kubenswrapper[4925]: I0202 11:01:55.006955 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 02 11:01:55 crc kubenswrapper[4925]: I0202 11:01:55.124709 4925 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 11:01:55 crc kubenswrapper[4925]: I0202 11:01:55.323674 4925 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 11:01:55 crc kubenswrapper[4925]: I0202 11:01:55.409446 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 11:01:55 crc kubenswrapper[4925]: I0202 11:01:55.528445 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 02 11:01:55 crc kubenswrapper[4925]: I0202 11:01:55.620972 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.559896 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.559986 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.673374 4925 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.680824 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.680903 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.680989 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681037 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681009 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681147 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681163 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681240 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681260 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681425 4925 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681437 4925 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681445 4925 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.681453 4925 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.688479 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.688543 4925 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2a7056c5-4305-4a3e-a486-d853c40af351"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.692276 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.695354 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.695409 4925 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2a7056c5-4305-4a3e-a486-d853c40af351"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.763840 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.782111 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.782179 4925 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446" exitCode=137
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.782239 4925 scope.go:117] "RemoveContainer" containerID="090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.782469 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.784881 4925 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.812329 4925 scope.go:117] "RemoveContainer" containerID="090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446"
Feb 02 11:01:56 crc kubenswrapper[4925]: E0202 11:01:56.812794 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446\": container with ID starting with 090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446 not found: ID does not exist" containerID="090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446"
Feb 02 11:01:56 crc kubenswrapper[4925]: I0202 11:01:56.812827 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446"} err="failed to get container status \"090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446\": rpc error: code = NotFound desc = could not find container \"090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446\": container with ID starting with 090e6ab0e2afa863f7c1331081845cf9357fe1c00933303cef1a5000b6221446 not found: ID does not exist"
Feb 02 11:01:57 crc kubenswrapper[4925]: I0202 11:01:57.072686 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 11:01:58 crc kubenswrapper[4925]: I0202 11:01:58.674252 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a"
path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 02 11:02:04 crc kubenswrapper[4925]: I0202 11:02:04.156654 4925 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 02 11:02:08 crc kubenswrapper[4925]: I0202 11:02:08.846601 4925 generic.go:334] "Generic (PLEG): container finished" podID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerID="e1fbc146738b64aa9bb6292522b265aeb87055a14d0a10b63b07be753af3cd5a" exitCode=0
Feb 02 11:02:08 crc kubenswrapper[4925]: I0202 11:02:08.846674 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" event={"ID":"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4","Type":"ContainerDied","Data":"e1fbc146738b64aa9bb6292522b265aeb87055a14d0a10b63b07be753af3cd5a"}
Feb 02 11:02:08 crc kubenswrapper[4925]: I0202 11:02:08.847546 4925 scope.go:117] "RemoveContainer" containerID="e1fbc146738b64aa9bb6292522b265aeb87055a14d0a10b63b07be753af3cd5a"
Feb 02 11:02:09 crc kubenswrapper[4925]: I0202 11:02:09.853575 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" event={"ID":"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4","Type":"ContainerStarted","Data":"edbdfd92eab9ae719ac07ecfa2fec52db673f349de35e1d9801518852f6d3afa"}
Feb 02 11:02:09 crc kubenswrapper[4925]: I0202 11:02:09.854303 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k"
Feb 02 11:02:09 crc kubenswrapper[4925]: I0202 11:02:09.855203 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k"
Feb 02 11:02:11 crc kubenswrapper[4925]: I0202 11:02:11.721714 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c76fdff-9268j"]
Feb 02 11:02:11 crc kubenswrapper[4925]: I0202 11:02:11.721945 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" podUID="e0ab306c-9181-4e57-8224-2841c8d2effe" containerName="controller-manager" containerID="cri-o://342838bccbed115e54fdc7dbdfb5d95cb32cb4d84a555843f3bbe4417b736c41" gracePeriod=30
Feb 02 11:02:11 crc kubenswrapper[4925]: I0202 11:02:11.729813 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws"]
Feb 02 11:02:11 crc kubenswrapper[4925]: I0202 11:02:11.730031 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" podUID="3d5ee896-9246-434b-a043-ab677266af4e" containerName="route-controller-manager" containerID="cri-o://9549ca1bc1c63d00fd20fa74640609c1369419ce90f3bf2c586dcc5b50aeb41f" gracePeriod=30
Feb 02 11:02:11 crc kubenswrapper[4925]: I0202 11:02:11.864307 4925 generic.go:334] "Generic (PLEG): container finished" podID="e0ab306c-9181-4e57-8224-2841c8d2effe" containerID="342838bccbed115e54fdc7dbdfb5d95cb32cb4d84a555843f3bbe4417b736c41" exitCode=0
Feb 02 11:02:11 crc kubenswrapper[4925]: I0202 11:02:11.864384 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" event={"ID":"e0ab306c-9181-4e57-8224-2841c8d2effe","Type":"ContainerDied","Data":"342838bccbed115e54fdc7dbdfb5d95cb32cb4d84a555843f3bbe4417b736c41"}
Feb 02 11:02:11 crc kubenswrapper[4925]: I0202 11:02:11.865797 4925 generic.go:334] "Generic (PLEG): container finished" podID="3d5ee896-9246-434b-a043-ab677266af4e" containerID="9549ca1bc1c63d00fd20fa74640609c1369419ce90f3bf2c586dcc5b50aeb41f" exitCode=0
Feb 02 11:02:11 crc kubenswrapper[4925]: I0202 11:02:11.866142 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" event={"ID":"3d5ee896-9246-434b-a043-ab677266af4e","Type":"ContainerDied","Data":"9549ca1bc1c63d00fd20fa74640609c1369419ce90f3bf2c586dcc5b50aeb41f"}
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.114496 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws"
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.158955 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j"
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.189378 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-config\") pod \"3d5ee896-9246-434b-a043-ab677266af4e\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") "
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.189476 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcskc\" (UniqueName: \"kubernetes.io/projected/3d5ee896-9246-434b-a043-ab677266af4e-kube-api-access-kcskc\") pod \"3d5ee896-9246-434b-a043-ab677266af4e\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") "
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.189548 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-client-ca\") pod \"3d5ee896-9246-434b-a043-ab677266af4e\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") "
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.189573 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d5ee896-9246-434b-a043-ab677266af4e-serving-cert\") pod
\"3d5ee896-9246-434b-a043-ab677266af4e\" (UID: \"3d5ee896-9246-434b-a043-ab677266af4e\") "
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.190343 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d5ee896-9246-434b-a043-ab677266af4e" (UID: "3d5ee896-9246-434b-a043-ab677266af4e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.190745 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-config" (OuterVolumeSpecName: "config") pod "3d5ee896-9246-434b-a043-ab677266af4e" (UID: "3d5ee896-9246-434b-a043-ab677266af4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.194959 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5ee896-9246-434b-a043-ab677266af4e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d5ee896-9246-434b-a043-ab677266af4e" (UID: "3d5ee896-9246-434b-a043-ab677266af4e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.195190 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5ee896-9246-434b-a043-ab677266af4e-kube-api-access-kcskc" (OuterVolumeSpecName: "kube-api-access-kcskc") pod "3d5ee896-9246-434b-a043-ab677266af4e" (UID: "3d5ee896-9246-434b-a043-ab677266af4e"). InnerVolumeSpecName "kube-api-access-kcskc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.291156 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdt7x\" (UniqueName: \"kubernetes.io/projected/e0ab306c-9181-4e57-8224-2841c8d2effe-kube-api-access-qdt7x\") pod \"e0ab306c-9181-4e57-8224-2841c8d2effe\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") "
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.291254 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-config\") pod \"e0ab306c-9181-4e57-8224-2841c8d2effe\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") "
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.291336 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-client-ca\") pod \"e0ab306c-9181-4e57-8224-2841c8d2effe\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") "
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.291422 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-proxy-ca-bundles\") pod \"e0ab306c-9181-4e57-8224-2841c8d2effe\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") "
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.291457 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ab306c-9181-4e57-8224-2841c8d2effe-serving-cert\") pod \"e0ab306c-9181-4e57-8224-2841c8d2effe\" (UID: \"e0ab306c-9181-4e57-8224-2841c8d2effe\") "
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.291783 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcskc\" (UniqueName: \"kubernetes.io/projected/3d5ee896-9246-434b-a043-ab677266af4e-kube-api-access-kcskc\") on node \"crc\" DevicePath \"\""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.291813 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.291834 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d5ee896-9246-434b-a043-ab677266af4e-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.291851 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5ee896-9246-434b-a043-ab677266af4e-config\") on node \"crc\" DevicePath \"\""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.292243 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0ab306c-9181-4e57-8224-2841c8d2effe" (UID: "e0ab306c-9181-4e57-8224-2841c8d2effe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.292298 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e0ab306c-9181-4e57-8224-2841c8d2effe" (UID: "e0ab306c-9181-4e57-8224-2841c8d2effe"). InnerVolumeSpecName "proxy-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.292381 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-config" (OuterVolumeSpecName: "config") pod "e0ab306c-9181-4e57-8224-2841c8d2effe" (UID: "e0ab306c-9181-4e57-8224-2841c8d2effe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.295180 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ab306c-9181-4e57-8224-2841c8d2effe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0ab306c-9181-4e57-8224-2841c8d2effe" (UID: "e0ab306c-9181-4e57-8224-2841c8d2effe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.295293 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ab306c-9181-4e57-8224-2841c8d2effe-kube-api-access-qdt7x" (OuterVolumeSpecName: "kube-api-access-qdt7x") pod "e0ab306c-9181-4e57-8224-2841c8d2effe" (UID: "e0ab306c-9181-4e57-8224-2841c8d2effe"). InnerVolumeSpecName "kube-api-access-qdt7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.393544 4925 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.393635 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0ab306c-9181-4e57-8224-2841c8d2effe-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.393697 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdt7x\" (UniqueName: \"kubernetes.io/projected/e0ab306c-9181-4e57-8224-2841c8d2effe-kube-api-access-qdt7x\") on node \"crc\" DevicePath \"\""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.393725 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-config\") on node \"crc\" DevicePath \"\""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.393746 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0ab306c-9181-4e57-8224-2841c8d2effe-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.873834 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j" event={"ID":"e0ab306c-9181-4e57-8224-2841c8d2effe","Type":"ContainerDied","Data":"9a7036938e419f0fc300668ae1a4610b7624fd4ab7370f7aaffc03c2415f550e"}
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.873881 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c4c76fdff-9268j"
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.873895 4925 scope.go:117] "RemoveContainer" containerID="342838bccbed115e54fdc7dbdfb5d95cb32cb4d84a555843f3bbe4417b736c41"
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.876682 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws" event={"ID":"3d5ee896-9246-434b-a043-ab677266af4e","Type":"ContainerDied","Data":"63cd7db02f7563fe335e857edade3e922e9a0aa0ed2ce351d10906182abcb825"}
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.876713 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws"
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.892191 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c76fdff-9268j"]
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.906313 4925 scope.go:117] "RemoveContainer" containerID="9549ca1bc1c63d00fd20fa74640609c1369419ce90f3bf2c586dcc5b50aeb41f"
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.913840 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c4c76fdff-9268j"]
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.917508 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws"]
Feb 02 11:02:12 crc kubenswrapper[4925]: I0202 11:02:12.921427 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59df95688f-pxvws"]
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.322038 4925 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68"]
Feb 02 11:02:13 crc kubenswrapper[4925]: E0202 11:02:13.323943 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5ee896-9246-434b-a043-ab677266af4e" containerName="route-controller-manager"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.323985 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5ee896-9246-434b-a043-ab677266af4e" containerName="route-controller-manager"
Feb 02 11:02:13 crc kubenswrapper[4925]: E0202 11:02:13.324045 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ab306c-9181-4e57-8224-2841c8d2effe" containerName="controller-manager"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.324067 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ab306c-9181-4e57-8224-2841c8d2effe" containerName="controller-manager"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.325636 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ab306c-9181-4e57-8224-2841c8d2effe" containerName="controller-manager"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.325707 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5ee896-9246-434b-a043-ab677266af4e" containerName="route-controller-manager"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.326625 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.327268 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-674c79b877-t9ggm"]
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.328053 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.328493 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.328620 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.330097 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.331091 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.331543 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.331867 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.332011 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.332190 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.332354 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.332517 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.333236 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-674c79b877-t9ggm"]
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.333630 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.333637 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.336647 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68"]
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.342976 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.429980 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28ab8f20-fe79-4627-8081-84402d62b3ab-serving-cert\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.430035 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-client-ca\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.430055 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd399eb-ded4-497c-8e86-d2bf01359a1a-serving-cert\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.430111 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqsz\" (UniqueName: \"kubernetes.io/projected/6fd399eb-ded4-497c-8e86-d2bf01359a1a-kube-api-access-6bqsz\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.430287 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-client-ca\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.430403 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-config\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm"
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.430448 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-config\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") "
pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.430473 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjzm\" (UniqueName: \"kubernetes.io/projected/28ab8f20-fe79-4627-8081-84402d62b3ab-kube-api-access-hrjzm\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.430521 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-proxy-ca-bundles\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.531478 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-client-ca\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.531530 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-config\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.531549 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-config\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.531568 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrjzm\" (UniqueName: \"kubernetes.io/projected/28ab8f20-fe79-4627-8081-84402d62b3ab-kube-api-access-hrjzm\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.531597 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-proxy-ca-bundles\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.531621 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28ab8f20-fe79-4627-8081-84402d62b3ab-serving-cert\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.531639 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-client-ca\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" 
Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.531657 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd399eb-ded4-497c-8e86-d2bf01359a1a-serving-cert\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.531680 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqsz\" (UniqueName: \"kubernetes.io/projected/6fd399eb-ded4-497c-8e86-d2bf01359a1a-kube-api-access-6bqsz\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.533030 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-client-ca\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.533234 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-client-ca\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.533324 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-config\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: 
\"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.533797 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-proxy-ca-bundles\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.534622 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-config\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.539777 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd399eb-ded4-497c-8e86-d2bf01359a1a-serving-cert\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.543035 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28ab8f20-fe79-4627-8081-84402d62b3ab-serving-cert\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.549520 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrjzm\" (UniqueName: 
\"kubernetes.io/projected/28ab8f20-fe79-4627-8081-84402d62b3ab-kube-api-access-hrjzm\") pod \"route-controller-manager-7bbd4f7bdc-sjv68\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.549527 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqsz\" (UniqueName: \"kubernetes.io/projected/6fd399eb-ded4-497c-8e86-d2bf01359a1a-kube-api-access-6bqsz\") pod \"controller-manager-674c79b877-t9ggm\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.644849 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.653762 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.868922 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-674c79b877-t9ggm"] Feb 02 11:02:13 crc kubenswrapper[4925]: I0202 11:02:13.931126 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68"] Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.671427 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5ee896-9246-434b-a043-ab677266af4e" path="/var/lib/kubelet/pods/3d5ee896-9246-434b-a043-ab677266af4e/volumes" Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.672306 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ab306c-9181-4e57-8224-2841c8d2effe" path="/var/lib/kubelet/pods/e0ab306c-9181-4e57-8224-2841c8d2effe/volumes" Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.894604 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" event={"ID":"28ab8f20-fe79-4627-8081-84402d62b3ab","Type":"ContainerStarted","Data":"a791325e0519742082f85f9c6d8e7a4d5979b3ed69b5f9e08e157bdf72afc3e9"} Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.895103 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.895139 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" event={"ID":"28ab8f20-fe79-4627-8081-84402d62b3ab","Type":"ContainerStarted","Data":"847ac1b3627b47e5da155ad7fb6ad671937f8266800d508c7c9409d9a7cfdc94"} Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.896011 4925 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" event={"ID":"6fd399eb-ded4-497c-8e86-d2bf01359a1a","Type":"ContainerStarted","Data":"61b58ccdfa7c0c959df076edbebc40e14e26d99ed78c321798a54991e3b9ac25"} Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.896047 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" event={"ID":"6fd399eb-ded4-497c-8e86-d2bf01359a1a","Type":"ContainerStarted","Data":"44c37aba96f937ebca1e0811a46a4f62b4cd53f6677942dd925e0ddb7b5ff8bd"} Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.896389 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.901695 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.903824 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.914956 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" podStartSLOduration=3.914939305 podStartE2EDuration="3.914939305s" podCreationTimestamp="2026-02-02 11:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:02:14.914532704 +0000 UTC m=+311.918781706" watchObservedRunningTime="2026-02-02 11:02:14.914939305 +0000 UTC m=+311.919188267" Feb 02 11:02:14 crc kubenswrapper[4925]: I0202 11:02:14.998550 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" podStartSLOduration=3.998519446 podStartE2EDuration="3.998519446s" podCreationTimestamp="2026-02-02 11:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:02:14.996489721 +0000 UTC m=+312.000738733" watchObservedRunningTime="2026-02-02 11:02:14.998519446 +0000 UTC m=+312.002768448" Feb 02 11:02:16 crc kubenswrapper[4925]: I0202 11:02:16.910355 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 11:02:16 crc kubenswrapper[4925]: I0202 11:02:16.912658 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 11:02:16 crc kubenswrapper[4925]: I0202 11:02:16.912703 4925 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e8c524473f10523b57b6bbf18a21b7b2b38bb9655e879a0b1cbb53d21a19474b" exitCode=137 Feb 02 11:02:16 crc kubenswrapper[4925]: I0202 11:02:16.913137 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e8c524473f10523b57b6bbf18a21b7b2b38bb9655e879a0b1cbb53d21a19474b"} Feb 02 11:02:16 crc kubenswrapper[4925]: I0202 11:02:16.913211 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"668009e15bfc2a43cb0dfed560858b13723f0f1110925a40231f5aa66cf9931f"} Feb 02 11:02:16 crc kubenswrapper[4925]: I0202 11:02:16.913242 4925 scope.go:117] "RemoveContainer" 
containerID="2bb8025f66b89077c858562effce1877fa680505058616988508db2e93b021d7" Feb 02 11:02:17 crc kubenswrapper[4925]: I0202 11:02:17.923053 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 11:02:24 crc kubenswrapper[4925]: I0202 11:02:24.780159 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 11:02:25 crc kubenswrapper[4925]: I0202 11:02:25.808850 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 11:02:25 crc kubenswrapper[4925]: I0202 11:02:25.812542 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 11:02:34 crc kubenswrapper[4925]: I0202 11:02:34.782633 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 11:02:36 crc kubenswrapper[4925]: I0202 11:02:36.861694 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-674c79b877-t9ggm"] Feb 02 11:02:36 crc kubenswrapper[4925]: I0202 11:02:36.862181 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" podUID="6fd399eb-ded4-497c-8e86-d2bf01359a1a" containerName="controller-manager" containerID="cri-o://61b58ccdfa7c0c959df076edbebc40e14e26d99ed78c321798a54991e3b9ac25" gracePeriod=30 Feb 02 11:02:36 crc kubenswrapper[4925]: I0202 11:02:36.868527 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68"] Feb 02 11:02:36 crc kubenswrapper[4925]: I0202 11:02:36.868770 
4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" podUID="28ab8f20-fe79-4627-8081-84402d62b3ab" containerName="route-controller-manager" containerID="cri-o://a791325e0519742082f85f9c6d8e7a4d5979b3ed69b5f9e08e157bdf72afc3e9" gracePeriod=30 Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.031361 4925 generic.go:334] "Generic (PLEG): container finished" podID="28ab8f20-fe79-4627-8081-84402d62b3ab" containerID="a791325e0519742082f85f9c6d8e7a4d5979b3ed69b5f9e08e157bdf72afc3e9" exitCode=0 Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.031446 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" event={"ID":"28ab8f20-fe79-4627-8081-84402d62b3ab","Type":"ContainerDied","Data":"a791325e0519742082f85f9c6d8e7a4d5979b3ed69b5f9e08e157bdf72afc3e9"} Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.032999 4925 generic.go:334] "Generic (PLEG): container finished" podID="6fd399eb-ded4-497c-8e86-d2bf01359a1a" containerID="61b58ccdfa7c0c959df076edbebc40e14e26d99ed78c321798a54991e3b9ac25" exitCode=0 Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.033026 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" event={"ID":"6fd399eb-ded4-497c-8e86-d2bf01359a1a","Type":"ContainerDied","Data":"61b58ccdfa7c0c959df076edbebc40e14e26d99ed78c321798a54991e3b9ac25"} Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.366651 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.438320 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrjzm\" (UniqueName: \"kubernetes.io/projected/28ab8f20-fe79-4627-8081-84402d62b3ab-kube-api-access-hrjzm\") pod \"28ab8f20-fe79-4627-8081-84402d62b3ab\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.438372 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-client-ca\") pod \"28ab8f20-fe79-4627-8081-84402d62b3ab\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.438392 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28ab8f20-fe79-4627-8081-84402d62b3ab-serving-cert\") pod \"28ab8f20-fe79-4627-8081-84402d62b3ab\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.438417 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-config\") pod \"28ab8f20-fe79-4627-8081-84402d62b3ab\" (UID: \"28ab8f20-fe79-4627-8081-84402d62b3ab\") " Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.439204 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-client-ca" (OuterVolumeSpecName: "client-ca") pod "28ab8f20-fe79-4627-8081-84402d62b3ab" (UID: "28ab8f20-fe79-4627-8081-84402d62b3ab"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.439252 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-config" (OuterVolumeSpecName: "config") pod "28ab8f20-fe79-4627-8081-84402d62b3ab" (UID: "28ab8f20-fe79-4627-8081-84402d62b3ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.444763 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ab8f20-fe79-4627-8081-84402d62b3ab-kube-api-access-hrjzm" (OuterVolumeSpecName: "kube-api-access-hrjzm") pod "28ab8f20-fe79-4627-8081-84402d62b3ab" (UID: "28ab8f20-fe79-4627-8081-84402d62b3ab"). InnerVolumeSpecName "kube-api-access-hrjzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.446182 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ab8f20-fe79-4627-8081-84402d62b3ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "28ab8f20-fe79-4627-8081-84402d62b3ab" (UID: "28ab8f20-fe79-4627-8081-84402d62b3ab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.483513 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.540185 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrjzm\" (UniqueName: \"kubernetes.io/projected/28ab8f20-fe79-4627-8081-84402d62b3ab-kube-api-access-hrjzm\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.540219 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.540229 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28ab8f20-fe79-4627-8081-84402d62b3ab-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.540237 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ab8f20-fe79-4627-8081-84402d62b3ab-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.636025 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tp28w"] Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.636312 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tp28w" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerName="registry-server" containerID="cri-o://840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43" gracePeriod=2 Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.640960 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-config\") pod \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\" (UID: 
\"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.641105 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bqsz\" (UniqueName: \"kubernetes.io/projected/6fd399eb-ded4-497c-8e86-d2bf01359a1a-kube-api-access-6bqsz\") pod \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.641177 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd399eb-ded4-497c-8e86-d2bf01359a1a-serving-cert\") pod \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.641204 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-client-ca\") pod \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.641235 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-proxy-ca-bundles\") pod \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\" (UID: \"6fd399eb-ded4-497c-8e86-d2bf01359a1a\") " Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.641935 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-client-ca" (OuterVolumeSpecName: "client-ca") pod "6fd399eb-ded4-497c-8e86-d2bf01359a1a" (UID: "6fd399eb-ded4-497c-8e86-d2bf01359a1a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.642021 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6fd399eb-ded4-497c-8e86-d2bf01359a1a" (UID: "6fd399eb-ded4-497c-8e86-d2bf01359a1a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.642407 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-config" (OuterVolumeSpecName: "config") pod "6fd399eb-ded4-497c-8e86-d2bf01359a1a" (UID: "6fd399eb-ded4-497c-8e86-d2bf01359a1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.644888 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd399eb-ded4-497c-8e86-d2bf01359a1a-kube-api-access-6bqsz" (OuterVolumeSpecName: "kube-api-access-6bqsz") pod "6fd399eb-ded4-497c-8e86-d2bf01359a1a" (UID: "6fd399eb-ded4-497c-8e86-d2bf01359a1a"). InnerVolumeSpecName "kube-api-access-6bqsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.645207 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd399eb-ded4-497c-8e86-d2bf01359a1a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6fd399eb-ded4-497c-8e86-d2bf01359a1a" (UID: "6fd399eb-ded4-497c-8e86-d2bf01359a1a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.744599 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.744624 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bqsz\" (UniqueName: \"kubernetes.io/projected/6fd399eb-ded4-497c-8e86-d2bf01359a1a-kube-api-access-6bqsz\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.744635 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd399eb-ded4-497c-8e86-d2bf01359a1a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.744644 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:37 crc kubenswrapper[4925]: I0202 11:02:37.744652 4925 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6fd399eb-ded4-497c-8e86-d2bf01359a1a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.039562 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.040057 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.040421 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68" event={"ID":"28ab8f20-fe79-4627-8081-84402d62b3ab","Type":"ContainerDied","Data":"847ac1b3627b47e5da155ad7fb6ad671937f8266800d508c7c9409d9a7cfdc94"} Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.040463 4925 scope.go:117] "RemoveContainer" containerID="a791325e0519742082f85f9c6d8e7a4d5979b3ed69b5f9e08e157bdf72afc3e9" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.043036 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.043257 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674c79b877-t9ggm" event={"ID":"6fd399eb-ded4-497c-8e86-d2bf01359a1a","Type":"ContainerDied","Data":"44c37aba96f937ebca1e0811a46a4f62b4cd53f6677942dd925e0ddb7b5ff8bd"} Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.045600 4925 generic.go:334] "Generic (PLEG): container finished" podID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerID="840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43" exitCode=0 Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.045624 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp28w" event={"ID":"c1044ab1-2d86-4f71-995a-5994d6b2262e","Type":"ContainerDied","Data":"840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43"} Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.045638 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp28w" 
event={"ID":"c1044ab1-2d86-4f71-995a-5994d6b2262e","Type":"ContainerDied","Data":"aaa863cc6846ba8861a7f10a069f2fa9b6848c5a246c4cbcfded55a88858a040"} Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.045708 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp28w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.059673 4925 scope.go:117] "RemoveContainer" containerID="61b58ccdfa7c0c959df076edbebc40e14e26d99ed78c321798a54991e3b9ac25" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.072385 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68"] Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.078727 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bbd4f7bdc-sjv68"] Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.093188 4925 scope.go:117] "RemoveContainer" containerID="840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.093537 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-674c79b877-t9ggm"] Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.098864 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-674c79b877-t9ggm"] Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.107964 4925 scope.go:117] "RemoveContainer" containerID="a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.125722 4925 scope.go:117] "RemoveContainer" containerID="b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.142367 4925 scope.go:117] "RemoveContainer" 
containerID="840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43" Feb 02 11:02:38 crc kubenswrapper[4925]: E0202 11:02:38.142868 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43\": container with ID starting with 840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43 not found: ID does not exist" containerID="840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.142924 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43"} err="failed to get container status \"840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43\": rpc error: code = NotFound desc = could not find container \"840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43\": container with ID starting with 840aeb75c267e7c780488105b93e3092e9815c9681182b3fb8b29d6776bf3f43 not found: ID does not exist" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.142948 4925 scope.go:117] "RemoveContainer" containerID="a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08" Feb 02 11:02:38 crc kubenswrapper[4925]: E0202 11:02:38.143202 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08\": container with ID starting with a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08 not found: ID does not exist" containerID="a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.143221 4925 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08"} err="failed to get container status \"a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08\": rpc error: code = NotFound desc = could not find container \"a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08\": container with ID starting with a6e75bff94bb589ccae37c280b06ccfc119dc2bf790d196adcb027f92b384e08 not found: ID does not exist" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.143234 4925 scope.go:117] "RemoveContainer" containerID="b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46" Feb 02 11:02:38 crc kubenswrapper[4925]: E0202 11:02:38.143397 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46\": container with ID starting with b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46 not found: ID does not exist" containerID="b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.143419 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46"} err="failed to get container status \"b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46\": rpc error: code = NotFound desc = could not find container \"b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46\": container with ID starting with b87ec82220311a126e79f364a4d3b3a7faaf36968c0601a4e8e1a56732438e46 not found: ID does not exist" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.153255 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-utilities\") pod \"c1044ab1-2d86-4f71-995a-5994d6b2262e\" (UID: 
\"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.153315 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbkxp\" (UniqueName: \"kubernetes.io/projected/c1044ab1-2d86-4f71-995a-5994d6b2262e-kube-api-access-dbkxp\") pod \"c1044ab1-2d86-4f71-995a-5994d6b2262e\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.153338 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-catalog-content\") pod \"c1044ab1-2d86-4f71-995a-5994d6b2262e\" (UID: \"c1044ab1-2d86-4f71-995a-5994d6b2262e\") " Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.154707 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-utilities" (OuterVolumeSpecName: "utilities") pod "c1044ab1-2d86-4f71-995a-5994d6b2262e" (UID: "c1044ab1-2d86-4f71-995a-5994d6b2262e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.157507 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1044ab1-2d86-4f71-995a-5994d6b2262e-kube-api-access-dbkxp" (OuterVolumeSpecName: "kube-api-access-dbkxp") pod "c1044ab1-2d86-4f71-995a-5994d6b2262e" (UID: "c1044ab1-2d86-4f71-995a-5994d6b2262e"). InnerVolumeSpecName "kube-api-access-dbkxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.254653 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.254943 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbkxp\" (UniqueName: \"kubernetes.io/projected/c1044ab1-2d86-4f71-995a-5994d6b2262e-kube-api-access-dbkxp\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.328748 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1044ab1-2d86-4f71-995a-5994d6b2262e" (UID: "c1044ab1-2d86-4f71-995a-5994d6b2262e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.333800 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76fbbf68d8-zt92b"] Feb 02 11:02:38 crc kubenswrapper[4925]: E0202 11:02:38.334043 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd399eb-ded4-497c-8e86-d2bf01359a1a" containerName="controller-manager" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.334058 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd399eb-ded4-497c-8e86-d2bf01359a1a" containerName="controller-manager" Feb 02 11:02:38 crc kubenswrapper[4925]: E0202 11:02:38.334089 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerName="extract-content" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.334096 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" 
containerName="extract-content" Feb 02 11:02:38 crc kubenswrapper[4925]: E0202 11:02:38.334109 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerName="extract-utilities" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.334116 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerName="extract-utilities" Feb 02 11:02:38 crc kubenswrapper[4925]: E0202 11:02:38.334124 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ab8f20-fe79-4627-8081-84402d62b3ab" containerName="route-controller-manager" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.334130 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ab8f20-fe79-4627-8081-84402d62b3ab" containerName="route-controller-manager" Feb 02 11:02:38 crc kubenswrapper[4925]: E0202 11:02:38.334137 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerName="registry-server" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.334143 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerName="registry-server" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.334234 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" containerName="registry-server" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.334247 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd399eb-ded4-497c-8e86-d2bf01359a1a" containerName="controller-manager" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.334255 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ab8f20-fe79-4627-8081-84402d62b3ab" containerName="route-controller-manager" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.334704 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.337187 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w"] Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.338474 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.346565 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.346696 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.346998 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.347115 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.347319 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.348477 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.348539 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.348659 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 11:02:38 crc 
kubenswrapper[4925]: I0202 11:02:38.348680 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.348778 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.348802 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.348962 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.351760 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w"] Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.354872 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76fbbf68d8-zt92b"] Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.355535 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1044ab1-2d86-4f71-995a-5994d6b2262e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.358900 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.398528 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tp28w"] Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.402530 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tp28w"] Feb 02 11:02:38 crc kubenswrapper[4925]: 
I0202 11:02:38.456703 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-config\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.456748 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d812945-4b7a-4215-9319-3eab3ea4f6e6-config\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.456782 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe99d52f-77a5-45fc-b56b-7280d067fb05-serving-cert\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.456951 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-client-ca\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.457027 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwwn\" (UniqueName: 
\"kubernetes.io/projected/4d812945-4b7a-4215-9319-3eab3ea4f6e6-kube-api-access-5gwwn\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.457128 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d812945-4b7a-4215-9319-3eab3ea4f6e6-proxy-ca-bundles\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.457180 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdgxl\" (UniqueName: \"kubernetes.io/projected/fe99d52f-77a5-45fc-b56b-7280d067fb05-kube-api-access-kdgxl\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.457200 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d812945-4b7a-4215-9319-3eab3ea4f6e6-client-ca\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.457228 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d812945-4b7a-4215-9319-3eab3ea4f6e6-serving-cert\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " 
pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: E0202 11:02:38.466577 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1044ab1_2d86_4f71_995a_5994d6b2262e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1044ab1_2d86_4f71_995a_5994d6b2262e.slice/crio-aaa863cc6846ba8861a7f10a069f2fa9b6848c5a246c4cbcfded55a88858a040\": RecentStats: unable to find data in memory cache]" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.558378 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gwwn\" (UniqueName: \"kubernetes.io/projected/4d812945-4b7a-4215-9319-3eab3ea4f6e6-kube-api-access-5gwwn\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.558674 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d812945-4b7a-4215-9319-3eab3ea4f6e6-proxy-ca-bundles\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.558844 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdgxl\" (UniqueName: \"kubernetes.io/projected/fe99d52f-77a5-45fc-b56b-7280d067fb05-kube-api-access-kdgxl\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: 
I0202 11:02:38.558967 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d812945-4b7a-4215-9319-3eab3ea4f6e6-client-ca\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.559124 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d812945-4b7a-4215-9319-3eab3ea4f6e6-serving-cert\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.559241 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-config\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.559350 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d812945-4b7a-4215-9319-3eab3ea4f6e6-config\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.559467 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe99d52f-77a5-45fc-b56b-7280d067fb05-serving-cert\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " 
pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.559609 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-client-ca\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.559997 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d812945-4b7a-4215-9319-3eab3ea4f6e6-proxy-ca-bundles\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.560311 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-client-ca\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.560413 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d812945-4b7a-4215-9319-3eab3ea4f6e6-client-ca\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.560627 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d812945-4b7a-4215-9319-3eab3ea4f6e6-config\") pod 
\"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.560995 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-config\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.563677 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d812945-4b7a-4215-9319-3eab3ea4f6e6-serving-cert\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.563861 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe99d52f-77a5-45fc-b56b-7280d067fb05-serving-cert\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.576371 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdgxl\" (UniqueName: \"kubernetes.io/projected/fe99d52f-77a5-45fc-b56b-7280d067fb05-kube-api-access-kdgxl\") pod \"route-controller-manager-7d45d745b-h2s6w\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.579041 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5gwwn\" (UniqueName: \"kubernetes.io/projected/4d812945-4b7a-4215-9319-3eab3ea4f6e6-kube-api-access-5gwwn\") pod \"controller-manager-76fbbf68d8-zt92b\" (UID: \"4d812945-4b7a-4215-9319-3eab3ea4f6e6\") " pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.581218 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w"] Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.581571 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.655156 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.679684 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ab8f20-fe79-4627-8081-84402d62b3ab" path="/var/lib/kubelet/pods/28ab8f20-fe79-4627-8081-84402d62b3ab/volumes" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.680632 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd399eb-ded4-497c-8e86-d2bf01359a1a" path="/var/lib/kubelet/pods/6fd399eb-ded4-497c-8e86-d2bf01359a1a/volumes" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.681353 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1044ab1-2d86-4f71-995a-5994d6b2262e" path="/var/lib/kubelet/pods/c1044ab1-2d86-4f71-995a-5994d6b2262e/volumes" Feb 02 11:02:38 crc kubenswrapper[4925]: I0202 11:02:38.854544 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76fbbf68d8-zt92b"] Feb 02 11:02:38 crc kubenswrapper[4925]: W0202 11:02:38.870679 4925 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d812945_4b7a_4215_9319_3eab3ea4f6e6.slice/crio-4f57a5b938bb6d59909c3e50907ac1ac08a217caa123f4bad1f4e3b5c43cdd8e WatchSource:0}: Error finding container 4f57a5b938bb6d59909c3e50907ac1ac08a217caa123f4bad1f4e3b5c43cdd8e: Status 404 returned error can't find the container with id 4f57a5b938bb6d59909c3e50907ac1ac08a217caa123f4bad1f4e3b5c43cdd8e Feb 02 11:02:39 crc kubenswrapper[4925]: I0202 11:02:39.039116 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w"] Feb 02 11:02:39 crc kubenswrapper[4925]: W0202 11:02:39.050966 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe99d52f_77a5_45fc_b56b_7280d067fb05.slice/crio-a310035655e7de88a74f3624651e32c0610054d97a20ce0e5f094a0739e4dac6 WatchSource:0}: Error finding container a310035655e7de88a74f3624651e32c0610054d97a20ce0e5f094a0739e4dac6: Status 404 returned error can't find the container with id a310035655e7de88a74f3624651e32c0610054d97a20ce0e5f094a0739e4dac6 Feb 02 11:02:39 crc kubenswrapper[4925]: I0202 11:02:39.054124 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" event={"ID":"4d812945-4b7a-4215-9319-3eab3ea4f6e6","Type":"ContainerStarted","Data":"59b317be601d6d2918d711db4aa293c449b23145d4b80665482960ed3cc914e4"} Feb 02 11:02:39 crc kubenswrapper[4925]: I0202 11:02:39.054165 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" event={"ID":"4d812945-4b7a-4215-9319-3eab3ea4f6e6","Type":"ContainerStarted","Data":"4f57a5b938bb6d59909c3e50907ac1ac08a217caa123f4bad1f4e3b5c43cdd8e"} Feb 02 11:02:39 crc kubenswrapper[4925]: I0202 11:02:39.054498 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:39 crc kubenswrapper[4925]: I0202 11:02:39.064483 4925 patch_prober.go:28] interesting pod/controller-manager-76fbbf68d8-zt92b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 02 11:02:39 crc kubenswrapper[4925]: I0202 11:02:39.064542 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" podUID="4d812945-4b7a-4215-9319-3eab3ea4f6e6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 02 11:02:39 crc kubenswrapper[4925]: I0202 11:02:39.076435 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" podStartSLOduration=3.076411583 podStartE2EDuration="3.076411583s" podCreationTimestamp="2026-02-02 11:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:02:39.070349705 +0000 UTC m=+336.074598677" watchObservedRunningTime="2026-02-02 11:02:39.076411583 +0000 UTC m=+336.080660545" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.062633 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" event={"ID":"fe99d52f-77a5-45fc-b56b-7280d067fb05","Type":"ContainerStarted","Data":"7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f"} Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.062941 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" 
event={"ID":"fe99d52f-77a5-45fc-b56b-7280d067fb05","Type":"ContainerStarted","Data":"a310035655e7de88a74f3624651e32c0610054d97a20ce0e5f094a0739e4dac6"} Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.062849 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" podUID="fe99d52f-77a5-45fc-b56b-7280d067fb05" containerName="route-controller-manager" containerID="cri-o://7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f" gracePeriod=30 Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.068303 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76fbbf68d8-zt92b" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.082364 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" podStartSLOduration=4.082343761 podStartE2EDuration="4.082343761s" podCreationTimestamp="2026-02-02 11:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:02:40.080568211 +0000 UTC m=+337.084817193" watchObservedRunningTime="2026-02-02 11:02:40.082343761 +0000 UTC m=+337.086592723" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.550290 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.576973 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz"] Feb 02 11:02:40 crc kubenswrapper[4925]: E0202 11:02:40.577258 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe99d52f-77a5-45fc-b56b-7280d067fb05" containerName="route-controller-manager" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.577282 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe99d52f-77a5-45fc-b56b-7280d067fb05" containerName="route-controller-manager" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.577393 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe99d52f-77a5-45fc-b56b-7280d067fb05" containerName="route-controller-manager" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.577824 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.587015 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz"] Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.688794 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-client-ca\") pod \"fe99d52f-77a5-45fc-b56b-7280d067fb05\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.688874 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdgxl\" (UniqueName: \"kubernetes.io/projected/fe99d52f-77a5-45fc-b56b-7280d067fb05-kube-api-access-kdgxl\") pod \"fe99d52f-77a5-45fc-b56b-7280d067fb05\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.688909 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe99d52f-77a5-45fc-b56b-7280d067fb05-serving-cert\") pod \"fe99d52f-77a5-45fc-b56b-7280d067fb05\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.688961 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-config\") pod \"fe99d52f-77a5-45fc-b56b-7280d067fb05\" (UID: \"fe99d52f-77a5-45fc-b56b-7280d067fb05\") " Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.689142 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxrgw\" (UniqueName: \"kubernetes.io/projected/40e37a50-98b3-4e8c-8ad3-03c87e86e651-kube-api-access-rxrgw\") pod 
\"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.689190 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40e37a50-98b3-4e8c-8ad3-03c87e86e651-serving-cert\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.689216 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-config\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.689287 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-client-ca\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.689537 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-client-ca" (OuterVolumeSpecName: "client-ca") pod "fe99d52f-77a5-45fc-b56b-7280d067fb05" (UID: "fe99d52f-77a5-45fc-b56b-7280d067fb05"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.689645 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-config" (OuterVolumeSpecName: "config") pod "fe99d52f-77a5-45fc-b56b-7280d067fb05" (UID: "fe99d52f-77a5-45fc-b56b-7280d067fb05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.695208 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe99d52f-77a5-45fc-b56b-7280d067fb05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fe99d52f-77a5-45fc-b56b-7280d067fb05" (UID: "fe99d52f-77a5-45fc-b56b-7280d067fb05"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.703701 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe99d52f-77a5-45fc-b56b-7280d067fb05-kube-api-access-kdgxl" (OuterVolumeSpecName: "kube-api-access-kdgxl") pod "fe99d52f-77a5-45fc-b56b-7280d067fb05" (UID: "fe99d52f-77a5-45fc-b56b-7280d067fb05"). InnerVolumeSpecName "kube-api-access-kdgxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.790210 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-client-ca\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.790306 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxrgw\" (UniqueName: \"kubernetes.io/projected/40e37a50-98b3-4e8c-8ad3-03c87e86e651-kube-api-access-rxrgw\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.790348 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40e37a50-98b3-4e8c-8ad3-03c87e86e651-serving-cert\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.790374 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-config\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.790433 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.790444 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe99d52f-77a5-45fc-b56b-7280d067fb05-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.790453 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdgxl\" (UniqueName: \"kubernetes.io/projected/fe99d52f-77a5-45fc-b56b-7280d067fb05-kube-api-access-kdgxl\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.790468 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe99d52f-77a5-45fc-b56b-7280d067fb05-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.791536 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-client-ca\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.791739 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-config\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.794605 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40e37a50-98b3-4e8c-8ad3-03c87e86e651-serving-cert\") pod 
\"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.816310 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxrgw\" (UniqueName: \"kubernetes.io/projected/40e37a50-98b3-4e8c-8ad3-03c87e86e651-kube-api-access-rxrgw\") pod \"route-controller-manager-679cd6988-fsdnz\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:40 crc kubenswrapper[4925]: I0202 11:02:40.891598 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.075929 4925 generic.go:334] "Generic (PLEG): container finished" podID="fe99d52f-77a5-45fc-b56b-7280d067fb05" containerID="7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f" exitCode=0 Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.076009 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.076009 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" event={"ID":"fe99d52f-77a5-45fc-b56b-7280d067fb05","Type":"ContainerDied","Data":"7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f"} Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.077123 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w" event={"ID":"fe99d52f-77a5-45fc-b56b-7280d067fb05","Type":"ContainerDied","Data":"a310035655e7de88a74f3624651e32c0610054d97a20ce0e5f094a0739e4dac6"} Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.077164 4925 scope.go:117] "RemoveContainer" containerID="7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f" Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.104007 4925 scope.go:117] "RemoveContainer" containerID="7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f" Feb 02 11:02:41 crc kubenswrapper[4925]: E0202 11:02:41.106160 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f\": container with ID starting with 7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f not found: ID does not exist" containerID="7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f" Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.106209 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f"} err="failed to get container status \"7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f\": rpc error: code = NotFound desc = 
could not find container \"7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f\": container with ID starting with 7790c9eed5ebfb4b56ed756101c29dac8ab872a493533d7bb0aa5be847196d3f not found: ID does not exist" Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.110798 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w"] Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.115564 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d45d745b-h2s6w"] Feb 02 11:02:41 crc kubenswrapper[4925]: I0202 11:02:41.288750 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz"] Feb 02 11:02:42 crc kubenswrapper[4925]: I0202 11:02:42.083965 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" event={"ID":"40e37a50-98b3-4e8c-8ad3-03c87e86e651","Type":"ContainerStarted","Data":"3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622"} Feb 02 11:02:42 crc kubenswrapper[4925]: I0202 11:02:42.084343 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" event={"ID":"40e37a50-98b3-4e8c-8ad3-03c87e86e651","Type":"ContainerStarted","Data":"8943cdc9c5f0df74efed73151be7f2a478f20d241981a69678345300f7cb5fb0"} Feb 02 11:02:42 crc kubenswrapper[4925]: I0202 11:02:42.084360 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:42 crc kubenswrapper[4925]: I0202 11:02:42.089663 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:42 crc 
kubenswrapper[4925]: I0202 11:02:42.100318 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" podStartSLOduration=4.100296054 podStartE2EDuration="4.100296054s" podCreationTimestamp="2026-02-02 11:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:02:42.098155385 +0000 UTC m=+339.102404357" watchObservedRunningTime="2026-02-02 11:02:42.100296054 +0000 UTC m=+339.104545036" Feb 02 11:02:42 crc kubenswrapper[4925]: I0202 11:02:42.671050 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe99d52f-77a5-45fc-b56b-7280d067fb05" path="/var/lib/kubelet/pods/fe99d52f-77a5-45fc-b56b-7280d067fb05/volumes" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.762595 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t2q7k"] Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.764557 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.788503 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t2q7k"] Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.848930 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.848977 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5nqh\" (UniqueName: \"kubernetes.io/projected/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-kube-api-access-r5nqh\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.849036 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-trusted-ca\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.849199 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.849311 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.849386 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-registry-certificates\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.849436 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-registry-tls\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.849498 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-bound-sa-token\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.872181 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.950303 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.950356 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-registry-certificates\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.950376 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-registry-tls\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.950395 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-bound-sa-token\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.950435 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.950454 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5nqh\" (UniqueName: \"kubernetes.io/projected/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-kube-api-access-r5nqh\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.950497 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-trusted-ca\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.951532 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.952330 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-trusted-ca\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc 
kubenswrapper[4925]: I0202 11:02:45.952477 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-registry-certificates\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.955677 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-registry-tls\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.957793 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.967441 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-bound-sa-token\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:45 crc kubenswrapper[4925]: I0202 11:02:45.968579 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5nqh\" (UniqueName: \"kubernetes.io/projected/9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2-kube-api-access-r5nqh\") pod \"image-registry-66df7c8f76-t2q7k\" (UID: \"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:46 crc kubenswrapper[4925]: I0202 11:02:46.113964 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:46 crc kubenswrapper[4925]: I0202 11:02:46.505431 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t2q7k"] Feb 02 11:02:47 crc kubenswrapper[4925]: I0202 11:02:47.112506 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" event={"ID":"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2","Type":"ContainerStarted","Data":"946f3800d813cdd4da074c64b3816fe99303309f46a4256b27a00170a1e53604"} Feb 02 11:02:47 crc kubenswrapper[4925]: I0202 11:02:47.112877 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" event={"ID":"9dbd14ef-8c1e-45e2-b916-1a9ff852ffd2","Type":"ContainerStarted","Data":"991a2073bd327f82aeb4c310573dccfbc9f5310f26b586ac94d38a8fcbd404f9"} Feb 02 11:02:47 crc kubenswrapper[4925]: I0202 11:02:47.112895 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:02:58 crc kubenswrapper[4925]: I0202 11:02:58.596372 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" podStartSLOduration=13.596346654 podStartE2EDuration="13.596346654s" podCreationTimestamp="2026-02-02 11:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:02:47.146883992 +0000 UTC m=+344.151133024" watchObservedRunningTime="2026-02-02 11:02:58.596346654 +0000 UTC m=+355.600595626" Feb 02 11:02:58 crc kubenswrapper[4925]: I0202 11:02:58.597494 4925 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz"] Feb 02 11:02:58 crc kubenswrapper[4925]: I0202 11:02:58.597714 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" podUID="40e37a50-98b3-4e8c-8ad3-03c87e86e651" containerName="route-controller-manager" containerID="cri-o://3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622" gracePeriod=30 Feb 02 11:02:58 crc kubenswrapper[4925]: E0202 11:02:58.726341 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40e37a50_98b3_4e8c_8ad3_03c87e86e651.slice/crio-3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.093734 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.194747 4925 generic.go:334] "Generic (PLEG): container finished" podID="40e37a50-98b3-4e8c-8ad3-03c87e86e651" containerID="3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622" exitCode=0 Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.194799 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" event={"ID":"40e37a50-98b3-4e8c-8ad3-03c87e86e651","Type":"ContainerDied","Data":"3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622"} Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.194808 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.194826 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz" event={"ID":"40e37a50-98b3-4e8c-8ad3-03c87e86e651","Type":"ContainerDied","Data":"8943cdc9c5f0df74efed73151be7f2a478f20d241981a69678345300f7cb5fb0"} Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.194845 4925 scope.go:117] "RemoveContainer" containerID="3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.211192 4925 scope.go:117] "RemoveContainer" containerID="3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622" Feb 02 11:02:59 crc kubenswrapper[4925]: E0202 11:02:59.211674 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622\": container with ID starting with 3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622 not found: ID does not exist" containerID="3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.211725 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622"} err="failed to get container status \"3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622\": rpc error: code = NotFound desc = could not find container \"3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622\": container with ID starting with 3517a1d80f34143d2a6c4e371121cd279d3041df2e42a58578cc521f95c00622 not found: ID does not exist" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.271579 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-client-ca\") pod \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.271707 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40e37a50-98b3-4e8c-8ad3-03c87e86e651-serving-cert\") pod \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.271808 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxrgw\" (UniqueName: \"kubernetes.io/projected/40e37a50-98b3-4e8c-8ad3-03c87e86e651-kube-api-access-rxrgw\") pod \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.271872 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-config\") pod \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\" (UID: \"40e37a50-98b3-4e8c-8ad3-03c87e86e651\") " Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.272416 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-client-ca" (OuterVolumeSpecName: "client-ca") pod "40e37a50-98b3-4e8c-8ad3-03c87e86e651" (UID: "40e37a50-98b3-4e8c-8ad3-03c87e86e651"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.272658 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-config" (OuterVolumeSpecName: "config") pod "40e37a50-98b3-4e8c-8ad3-03c87e86e651" (UID: "40e37a50-98b3-4e8c-8ad3-03c87e86e651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.273130 4925 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.273169 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e37a50-98b3-4e8c-8ad3-03c87e86e651-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.277775 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e37a50-98b3-4e8c-8ad3-03c87e86e651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "40e37a50-98b3-4e8c-8ad3-03c87e86e651" (UID: "40e37a50-98b3-4e8c-8ad3-03c87e86e651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.278225 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e37a50-98b3-4e8c-8ad3-03c87e86e651-kube-api-access-rxrgw" (OuterVolumeSpecName: "kube-api-access-rxrgw") pod "40e37a50-98b3-4e8c-8ad3-03c87e86e651" (UID: "40e37a50-98b3-4e8c-8ad3-03c87e86e651"). InnerVolumeSpecName "kube-api-access-rxrgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.378674 4925 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40e37a50-98b3-4e8c-8ad3-03c87e86e651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.378743 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxrgw\" (UniqueName: \"kubernetes.io/projected/40e37a50-98b3-4e8c-8ad3-03c87e86e651-kube-api-access-rxrgw\") on node \"crc\" DevicePath \"\"" Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.538048 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz"] Feb 02 11:02:59 crc kubenswrapper[4925]: I0202 11:02:59.545710 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-679cd6988-fsdnz"] Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.358297 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm"] Feb 02 11:03:00 crc kubenswrapper[4925]: E0202 11:03:00.358758 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e37a50-98b3-4e8c-8ad3-03c87e86e651" containerName="route-controller-manager" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.358791 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e37a50-98b3-4e8c-8ad3-03c87e86e651" containerName="route-controller-manager" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.359348 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e37a50-98b3-4e8c-8ad3-03c87e86e651" containerName="route-controller-manager" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.360003 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.363202 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.363370 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.363374 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.363847 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.364050 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.364620 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.378277 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm"] Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.491606 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-serving-cert\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.491935 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-client-ca\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.492136 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk7w5\" (UniqueName: \"kubernetes.io/projected/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-kube-api-access-tk7w5\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.492210 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-config\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.593104 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-client-ca\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.593159 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk7w5\" (UniqueName: \"kubernetes.io/projected/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-kube-api-access-tk7w5\") pod 
\"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.593212 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-config\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.593270 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-serving-cert\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.594522 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-client-ca\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.595329 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-config\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.598148 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-serving-cert\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.615866 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk7w5\" (UniqueName: \"kubernetes.io/projected/a72d505d-6086-4d7b-8ff4-b0d3efe0163b-kube-api-access-tk7w5\") pod \"route-controller-manager-7d45d745b-29mdm\" (UID: \"a72d505d-6086-4d7b-8ff4-b0d3efe0163b\") " pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.670531 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e37a50-98b3-4e8c-8ad3-03c87e86e651" path="/var/lib/kubelet/pods/40e37a50-98b3-4e8c-8ad3-03c87e86e651/volumes" Feb 02 11:03:00 crc kubenswrapper[4925]: I0202 11:03:00.687785 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:01 crc kubenswrapper[4925]: I0202 11:03:01.072564 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm"] Feb 02 11:03:01 crc kubenswrapper[4925]: I0202 11:03:01.214424 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" event={"ID":"a72d505d-6086-4d7b-8ff4-b0d3efe0163b","Type":"ContainerStarted","Data":"64700cfa97bcd5a61733b1d10ade49a5abb621c707304eb5a428cdc1f803d21d"} Feb 02 11:03:02 crc kubenswrapper[4925]: I0202 11:03:02.221554 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" event={"ID":"a72d505d-6086-4d7b-8ff4-b0d3efe0163b","Type":"ContainerStarted","Data":"75865ee997313601349abc705ccb32edf5313df6128adb3793a112ccc69de193"} Feb 02 11:03:02 crc kubenswrapper[4925]: I0202 11:03:02.222227 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:02 crc kubenswrapper[4925]: I0202 11:03:02.227169 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" Feb 02 11:03:02 crc kubenswrapper[4925]: I0202 11:03:02.239199 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d45d745b-29mdm" podStartSLOduration=4.239182686 podStartE2EDuration="4.239182686s" podCreationTimestamp="2026-02-02 11:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:03:02.237194701 +0000 UTC m=+359.241443663" 
watchObservedRunningTime="2026-02-02 11:03:02.239182686 +0000 UTC m=+359.243431648" Feb 02 11:03:06 crc kubenswrapper[4925]: I0202 11:03:06.120454 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-t2q7k" Feb 02 11:03:06 crc kubenswrapper[4925]: I0202 11:03:06.191876 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-56md8"] Feb 02 11:03:13 crc kubenswrapper[4925]: I0202 11:03:13.399412 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:03:13 crc kubenswrapper[4925]: I0202 11:03:13.400747 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:03:17 crc kubenswrapper[4925]: I0202 11:03:17.976747 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gcc4h"] Feb 02 11:03:17 crc kubenswrapper[4925]: I0202 11:03:17.978449 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gcc4h" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerName="registry-server" containerID="cri-o://2be20294d9fa6bfab7e85cd7ffe04f972c8c7637fa4d7064e8757ee89f751818" gracePeriod=30 Feb 02 11:03:17 crc kubenswrapper[4925]: I0202 11:03:17.980932 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xf5qm"] Feb 02 11:03:17 crc kubenswrapper[4925]: I0202 11:03:17.981204 4925 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xf5qm" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerName="registry-server" containerID="cri-o://b400fc8915bc867126cef5aca6f8d1dbf6fee7279269bcc3d6a6a6d09b9862e9" gracePeriod=30 Feb 02 11:03:17 crc kubenswrapper[4925]: I0202 11:03:17.985942 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kn82k"] Feb 02 11:03:17 crc kubenswrapper[4925]: I0202 11:03:17.986207 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" containerID="cri-o://edbdfd92eab9ae719ac07ecfa2fec52db673f349de35e1d9801518852f6d3afa" gracePeriod=30 Feb 02 11:03:17 crc kubenswrapper[4925]: I0202 11:03:17.994852 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5f7d"] Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:17.995491 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d5f7d" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerName="registry-server" containerID="cri-o://377df476e9567222ae4dcfcf4311f03b6963952208edb4c6b264d25f405679eb" gracePeriod=30 Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.004016 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gmldm"] Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.004257 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gmldm" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerName="registry-server" containerID="cri-o://cd4ce0781ba4db9241633b3a8fba846243cb389de8339e0a2bf348e9b9a12a51" gracePeriod=30 Feb 02 11:03:18 crc kubenswrapper[4925]: 
I0202 11:03:18.009229 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6p5nd"] Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.010123 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.022507 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6p5nd"] Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.056990 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ed2f286-6b23-4789-9f42-9da9d276812e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6p5nd\" (UID: \"7ed2f286-6b23-4789-9f42-9da9d276812e\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.057030 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zglb6\" (UniqueName: \"kubernetes.io/projected/7ed2f286-6b23-4789-9f42-9da9d276812e-kube-api-access-zglb6\") pod \"marketplace-operator-79b997595-6p5nd\" (UID: \"7ed2f286-6b23-4789-9f42-9da9d276812e\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.057318 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ed2f286-6b23-4789-9f42-9da9d276812e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6p5nd\" (UID: \"7ed2f286-6b23-4789-9f42-9da9d276812e\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.158702 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ed2f286-6b23-4789-9f42-9da9d276812e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6p5nd\" (UID: \"7ed2f286-6b23-4789-9f42-9da9d276812e\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.159777 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zglb6\" (UniqueName: \"kubernetes.io/projected/7ed2f286-6b23-4789-9f42-9da9d276812e-kube-api-access-zglb6\") pod \"marketplace-operator-79b997595-6p5nd\" (UID: \"7ed2f286-6b23-4789-9f42-9da9d276812e\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.159860 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ed2f286-6b23-4789-9f42-9da9d276812e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6p5nd\" (UID: \"7ed2f286-6b23-4789-9f42-9da9d276812e\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.161610 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ed2f286-6b23-4789-9f42-9da9d276812e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6p5nd\" (UID: \"7ed2f286-6b23-4789-9f42-9da9d276812e\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.164893 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ed2f286-6b23-4789-9f42-9da9d276812e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6p5nd\" (UID: 
\"7ed2f286-6b23-4789-9f42-9da9d276812e\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.192726 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zglb6\" (UniqueName: \"kubernetes.io/projected/7ed2f286-6b23-4789-9f42-9da9d276812e-kube-api-access-zglb6\") pod \"marketplace-operator-79b997595-6p5nd\" (UID: \"7ed2f286-6b23-4789-9f42-9da9d276812e\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.309333 4925 generic.go:334] "Generic (PLEG): container finished" podID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerID="2be20294d9fa6bfab7e85cd7ffe04f972c8c7637fa4d7064e8757ee89f751818" exitCode=0 Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.309424 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcc4h" event={"ID":"0f7aa95c-3861-48ab-a30f-0301aad169d7","Type":"ContainerDied","Data":"2be20294d9fa6bfab7e85cd7ffe04f972c8c7637fa4d7064e8757ee89f751818"} Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.311962 4925 generic.go:334] "Generic (PLEG): container finished" podID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerID="cd4ce0781ba4db9241633b3a8fba846243cb389de8339e0a2bf348e9b9a12a51" exitCode=0 Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.312033 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gmldm" event={"ID":"b01fd158-f4e2-4ec8-953b-12dae9c49dd7","Type":"ContainerDied","Data":"cd4ce0781ba4db9241633b3a8fba846243cb389de8339e0a2bf348e9b9a12a51"} Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.313752 4925 generic.go:334] "Generic (PLEG): container finished" podID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerID="377df476e9567222ae4dcfcf4311f03b6963952208edb4c6b264d25f405679eb" exitCode=0 Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 
11:03:18.313809 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5f7d" event={"ID":"c985f150-ec7d-4175-99a1-8fb775b7d7d9","Type":"ContainerDied","Data":"377df476e9567222ae4dcfcf4311f03b6963952208edb4c6b264d25f405679eb"} Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.315233 4925 generic.go:334] "Generic (PLEG): container finished" podID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerID="edbdfd92eab9ae719ac07ecfa2fec52db673f349de35e1d9801518852f6d3afa" exitCode=0 Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.315280 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" event={"ID":"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4","Type":"ContainerDied","Data":"edbdfd92eab9ae719ac07ecfa2fec52db673f349de35e1d9801518852f6d3afa"} Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.315310 4925 scope.go:117] "RemoveContainer" containerID="e1fbc146738b64aa9bb6292522b265aeb87055a14d0a10b63b07be753af3cd5a" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.317709 4925 generic.go:334] "Generic (PLEG): container finished" podID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerID="b400fc8915bc867126cef5aca6f8d1dbf6fee7279269bcc3d6a6a6d09b9862e9" exitCode=0 Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.317735 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf5qm" event={"ID":"51968f99-bd7d-4958-bb6f-ba8035b2e637","Type":"ContainerDied","Data":"b400fc8915bc867126cef5aca6f8d1dbf6fee7279269bcc3d6a6a6d09b9862e9"} Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.330806 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.484131 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.551657 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xf5qm" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.554844 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.564464 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-utilities\") pod \"0f7aa95c-3861-48ab-a30f-0301aad169d7\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.564580 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-catalog-content\") pod \"0f7aa95c-3861-48ab-a30f-0301aad169d7\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.564611 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsdkw\" (UniqueName: \"kubernetes.io/projected/c985f150-ec7d-4175-99a1-8fb775b7d7d9-kube-api-access-lsdkw\") pod \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.564639 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-catalog-content\") pod \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.564680 4925 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-utilities\") pod \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\" (UID: \"c985f150-ec7d-4175-99a1-8fb775b7d7d9\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.564702 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-utilities\") pod \"51968f99-bd7d-4958-bb6f-ba8035b2e637\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.564753 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-catalog-content\") pod \"51968f99-bd7d-4958-bb6f-ba8035b2e637\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.564781 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdjsx\" (UniqueName: \"kubernetes.io/projected/51968f99-bd7d-4958-bb6f-ba8035b2e637-kube-api-access-pdjsx\") pod \"51968f99-bd7d-4958-bb6f-ba8035b2e637\" (UID: \"51968f99-bd7d-4958-bb6f-ba8035b2e637\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.564809 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wclhj\" (UniqueName: \"kubernetes.io/projected/0f7aa95c-3861-48ab-a30f-0301aad169d7-kube-api-access-wclhj\") pod \"0f7aa95c-3861-48ab-a30f-0301aad169d7\" (UID: \"0f7aa95c-3861-48ab-a30f-0301aad169d7\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.566788 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-utilities" (OuterVolumeSpecName: "utilities") pod "c985f150-ec7d-4175-99a1-8fb775b7d7d9" (UID: 
"c985f150-ec7d-4175-99a1-8fb775b7d7d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.567450 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-utilities" (OuterVolumeSpecName: "utilities") pod "51968f99-bd7d-4958-bb6f-ba8035b2e637" (UID: "51968f99-bd7d-4958-bb6f-ba8035b2e637"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.568559 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-utilities" (OuterVolumeSpecName: "utilities") pod "0f7aa95c-3861-48ab-a30f-0301aad169d7" (UID: "0f7aa95c-3861-48ab-a30f-0301aad169d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.578180 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51968f99-bd7d-4958-bb6f-ba8035b2e637-kube-api-access-pdjsx" (OuterVolumeSpecName: "kube-api-access-pdjsx") pod "51968f99-bd7d-4958-bb6f-ba8035b2e637" (UID: "51968f99-bd7d-4958-bb6f-ba8035b2e637"). InnerVolumeSpecName "kube-api-access-pdjsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.578306 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c985f150-ec7d-4175-99a1-8fb775b7d7d9-kube-api-access-lsdkw" (OuterVolumeSpecName: "kube-api-access-lsdkw") pod "c985f150-ec7d-4175-99a1-8fb775b7d7d9" (UID: "c985f150-ec7d-4175-99a1-8fb775b7d7d9"). InnerVolumeSpecName "kube-api-access-lsdkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.584293 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f7aa95c-3861-48ab-a30f-0301aad169d7-kube-api-access-wclhj" (OuterVolumeSpecName: "kube-api-access-wclhj") pod "0f7aa95c-3861-48ab-a30f-0301aad169d7" (UID: "0f7aa95c-3861-48ab-a30f-0301aad169d7"). InnerVolumeSpecName "kube-api-access-wclhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.594977 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c985f150-ec7d-4175-99a1-8fb775b7d7d9" (UID: "c985f150-ec7d-4175-99a1-8fb775b7d7d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.617826 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.631818 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51968f99-bd7d-4958-bb6f-ba8035b2e637" (UID: "51968f99-bd7d-4958-bb6f-ba8035b2e637"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.635016 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f7aa95c-3861-48ab-a30f-0301aad169d7" (UID: "0f7aa95c-3861-48ab-a30f-0301aad169d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.648248 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.667212 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-operator-metrics\") pod \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.667810 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-utilities\") pod \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.667844 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lchfb\" (UniqueName: \"kubernetes.io/projected/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-kube-api-access-lchfb\") pod \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.667889 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-trusted-ca\") pod \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.667908 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w67tf\" (UniqueName: \"kubernetes.io/projected/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-kube-api-access-w67tf\") 
pod \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\" (UID: \"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.667933 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-catalog-content\") pod \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\" (UID: \"b01fd158-f4e2-4ec8-953b-12dae9c49dd7\") " Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.668183 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.668200 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f7aa95c-3861-48ab-a30f-0301aad169d7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.668214 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsdkw\" (UniqueName: \"kubernetes.io/projected/c985f150-ec7d-4175-99a1-8fb775b7d7d9-kube-api-access-lsdkw\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.668225 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.668257 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c985f150-ec7d-4175-99a1-8fb775b7d7d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.668265 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.668275 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51968f99-bd7d-4958-bb6f-ba8035b2e637-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.668330 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdjsx\" (UniqueName: \"kubernetes.io/projected/51968f99-bd7d-4958-bb6f-ba8035b2e637-kube-api-access-pdjsx\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.668343 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wclhj\" (UniqueName: \"kubernetes.io/projected/0f7aa95c-3861-48ab-a30f-0301aad169d7-kube-api-access-wclhj\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.672931 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" (UID: "c8e6ecfa-3855-4fee-890a-2a88f84dc8a4"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.673862 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-utilities" (OuterVolumeSpecName: "utilities") pod "b01fd158-f4e2-4ec8-953b-12dae9c49dd7" (UID: "b01fd158-f4e2-4ec8-953b-12dae9c49dd7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.674346 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-kube-api-access-lchfb" (OuterVolumeSpecName: "kube-api-access-lchfb") pod "b01fd158-f4e2-4ec8-953b-12dae9c49dd7" (UID: "b01fd158-f4e2-4ec8-953b-12dae9c49dd7"). InnerVolumeSpecName "kube-api-access-lchfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.676787 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-kube-api-access-w67tf" (OuterVolumeSpecName: "kube-api-access-w67tf") pod "c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" (UID: "c8e6ecfa-3855-4fee-890a-2a88f84dc8a4"). InnerVolumeSpecName "kube-api-access-w67tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.682516 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" (UID: "c8e6ecfa-3855-4fee-890a-2a88f84dc8a4"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.770787 4925 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.771171 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.771309 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lchfb\" (UniqueName: \"kubernetes.io/projected/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-kube-api-access-lchfb\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.771326 4925 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.771336 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w67tf\" (UniqueName: \"kubernetes.io/projected/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4-kube-api-access-w67tf\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.784492 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b01fd158-f4e2-4ec8-953b-12dae9c49dd7" (UID: "b01fd158-f4e2-4ec8-953b-12dae9c49dd7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.872392 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01fd158-f4e2-4ec8-953b-12dae9c49dd7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:18 crc kubenswrapper[4925]: I0202 11:03:18.903550 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6p5nd"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.324921 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gmldm" event={"ID":"b01fd158-f4e2-4ec8-953b-12dae9c49dd7","Type":"ContainerDied","Data":"4c352cb4fd3c210467067bbe055d853399e9a729a1c47ef7539a4e73ba68620f"} Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.324971 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gmldm" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.325268 4925 scope.go:117] "RemoveContainer" containerID="cd4ce0781ba4db9241633b3a8fba846243cb389de8339e0a2bf348e9b9a12a51" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.327879 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5f7d" event={"ID":"c985f150-ec7d-4175-99a1-8fb775b7d7d9","Type":"ContainerDied","Data":"97b056fbbbbe7ca947e4512af405aa7efaabf4804e2c10d556e1733cd833b4db"} Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.327965 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5f7d" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.336520 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" event={"ID":"c8e6ecfa-3855-4fee-890a-2a88f84dc8a4","Type":"ContainerDied","Data":"972e8c6a4c6fe6201938c6548ce23dfa7b9bfb9749a084d652f03b29b34a50c6"} Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.336590 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kn82k" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.342909 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xf5qm" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.342906 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf5qm" event={"ID":"51968f99-bd7d-4958-bb6f-ba8035b2e637","Type":"ContainerDied","Data":"949bb788edd4655f009729a5f3fb060b1bcc9d6241b312694343fc2857aff924"} Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.347203 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" event={"ID":"7ed2f286-6b23-4789-9f42-9da9d276812e","Type":"ContainerStarted","Data":"ab8ff79f780bc65af55bfcc881752eed10424824e53f534fc407204b1c40183c"} Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.347254 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.347264 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" 
event={"ID":"7ed2f286-6b23-4789-9f42-9da9d276812e","Type":"ContainerStarted","Data":"2923025e378dc11bb72ec80fb00cba2210b2e86267fbf0861fc08082d38cdf22"} Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.349959 4925 scope.go:117] "RemoveContainer" containerID="a2c5cd00b90d42f2084e1e80d4ac37c40974b301a0275f764a6ff8f01e375570" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.350737 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcc4h" event={"ID":"0f7aa95c-3861-48ab-a30f-0301aad169d7","Type":"ContainerDied","Data":"a3c11c398fabe8655d99143af21a81855162257d3a530761add9d99c2d0c3d7c"} Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.350831 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gcc4h" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.351557 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.351837 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5f7d"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.357421 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5f7d"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.364781 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xf5qm"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.369049 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xf5qm"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.383385 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6p5nd" podStartSLOduration=2.383368106 
podStartE2EDuration="2.383368106s" podCreationTimestamp="2026-02-02 11:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:03:19.381940637 +0000 UTC m=+376.386189619" watchObservedRunningTime="2026-02-02 11:03:19.383368106 +0000 UTC m=+376.387617068" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.394159 4925 scope.go:117] "RemoveContainer" containerID="487b68362beb854baee62a204f53cff077d9de711edea8ce1adff2a2ebb95c97" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.427204 4925 scope.go:117] "RemoveContainer" containerID="377df476e9567222ae4dcfcf4311f03b6963952208edb4c6b264d25f405679eb" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.435691 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kn82k"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.451519 4925 scope.go:117] "RemoveContainer" containerID="acca7a6ecd455f83f22c003267dbabef1b98c06c702784748b8f4c6430438a1c" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.452519 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kn82k"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.473960 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gcc4h"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.474768 4925 scope.go:117] "RemoveContainer" containerID="a94b4ffd487b418c228342c2c2f7b298d7aded11f5295dec99ca94f10b9a3369" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.477613 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gcc4h"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.481326 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gmldm"] Feb 02 11:03:19 crc kubenswrapper[4925]: 
I0202 11:03:19.484859 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gmldm"] Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.490268 4925 scope.go:117] "RemoveContainer" containerID="edbdfd92eab9ae719ac07ecfa2fec52db673f349de35e1d9801518852f6d3afa" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.504531 4925 scope.go:117] "RemoveContainer" containerID="b400fc8915bc867126cef5aca6f8d1dbf6fee7279269bcc3d6a6a6d09b9862e9" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.517230 4925 scope.go:117] "RemoveContainer" containerID="f3c5b65c4554f93c02ac03604f372afb150aeb1541852eb3444550ae110cb15e" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.530575 4925 scope.go:117] "RemoveContainer" containerID="11a3196a80762f3d900ab6b764b740d5ce55b40b9e3f51e9b99f444da7f31e72" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.544630 4925 scope.go:117] "RemoveContainer" containerID="2be20294d9fa6bfab7e85cd7ffe04f972c8c7637fa4d7064e8757ee89f751818" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.559250 4925 scope.go:117] "RemoveContainer" containerID="3f7259dfd4e49bbdfd5394ffc09a348322a3cc96788c3f0d56713eece6862da9" Feb 02 11:03:19 crc kubenswrapper[4925]: I0202 11:03:19.574490 4925 scope.go:117] "RemoveContainer" containerID="41fb91a2f7cbf37ce3700bf89e695299d0b0769ddc8bcf08b960a10f259eeb5e" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.384045 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5w4mz"] Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.386580 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.386607 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" Feb 02 11:03:20 crc kubenswrapper[4925]: 
E0202 11:03:20.386630 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.386644 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.386664 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.386678 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.386701 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerName="extract-content" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.386714 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerName="extract-content" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.387847 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.387880 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.387898 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.387911 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: 
E0202 11:03:20.387975 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerName="extract-utilities" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.387988 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerName="extract-utilities" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.388005 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerName="extract-utilities" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388017 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerName="extract-utilities" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.388031 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerName="extract-content" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388041 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerName="extract-content" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.388053 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerName="extract-content" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388063 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerName="extract-content" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.388102 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerName="extract-utilities" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388113 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerName="extract-utilities" Feb 02 11:03:20 crc kubenswrapper[4925]: 
E0202 11:03:20.388130 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerName="extract-utilities" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388140 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerName="extract-utilities" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.388152 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerName="extract-content" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388162 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerName="extract-content" Feb 02 11:03:20 crc kubenswrapper[4925]: E0202 11:03:20.388173 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388184 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388364 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388383 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388398 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388413 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" containerName="registry-server" Feb 02 11:03:20 crc 
kubenswrapper[4925]: I0202 11:03:20.388436 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" containerName="registry-server" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.388749 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" containerName="marketplace-operator" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.389596 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.394377 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.394515 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5w4mz"] Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.492159 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f59ff6-4459-41ea-ab79-373c701ffcc3-utilities\") pod \"redhat-marketplace-5w4mz\" (UID: \"25f59ff6-4459-41ea-ab79-373c701ffcc3\") " pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.492229 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f59ff6-4459-41ea-ab79-373c701ffcc3-catalog-content\") pod \"redhat-marketplace-5w4mz\" (UID: \"25f59ff6-4459-41ea-ab79-373c701ffcc3\") " pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.492436 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89x6d\" (UniqueName: 
\"kubernetes.io/projected/25f59ff6-4459-41ea-ab79-373c701ffcc3-kube-api-access-89x6d\") pod \"redhat-marketplace-5w4mz\" (UID: \"25f59ff6-4459-41ea-ab79-373c701ffcc3\") " pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.578124 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2f6j4"] Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.579288 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.586284 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.589114 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2f6j4"] Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.593682 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f59ff6-4459-41ea-ab79-373c701ffcc3-utilities\") pod \"redhat-marketplace-5w4mz\" (UID: \"25f59ff6-4459-41ea-ab79-373c701ffcc3\") " pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.593736 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23cca3fd-3790-4add-a724-50721c42fe9d-catalog-content\") pod \"certified-operators-2f6j4\" (UID: \"23cca3fd-3790-4add-a724-50721c42fe9d\") " pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.593756 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/25f59ff6-4459-41ea-ab79-373c701ffcc3-catalog-content\") pod \"redhat-marketplace-5w4mz\" (UID: \"25f59ff6-4459-41ea-ab79-373c701ffcc3\") " pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.593778 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94rjm\" (UniqueName: \"kubernetes.io/projected/23cca3fd-3790-4add-a724-50721c42fe9d-kube-api-access-94rjm\") pod \"certified-operators-2f6j4\" (UID: \"23cca3fd-3790-4add-a724-50721c42fe9d\") " pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.593804 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23cca3fd-3790-4add-a724-50721c42fe9d-utilities\") pod \"certified-operators-2f6j4\" (UID: \"23cca3fd-3790-4add-a724-50721c42fe9d\") " pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.593837 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89x6d\" (UniqueName: \"kubernetes.io/projected/25f59ff6-4459-41ea-ab79-373c701ffcc3-kube-api-access-89x6d\") pod \"redhat-marketplace-5w4mz\" (UID: \"25f59ff6-4459-41ea-ab79-373c701ffcc3\") " pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.594310 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f59ff6-4459-41ea-ab79-373c701ffcc3-catalog-content\") pod \"redhat-marketplace-5w4mz\" (UID: \"25f59ff6-4459-41ea-ab79-373c701ffcc3\") " pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.594398 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/25f59ff6-4459-41ea-ab79-373c701ffcc3-utilities\") pod \"redhat-marketplace-5w4mz\" (UID: \"25f59ff6-4459-41ea-ab79-373c701ffcc3\") " pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.613692 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89x6d\" (UniqueName: \"kubernetes.io/projected/25f59ff6-4459-41ea-ab79-373c701ffcc3-kube-api-access-89x6d\") pod \"redhat-marketplace-5w4mz\" (UID: \"25f59ff6-4459-41ea-ab79-373c701ffcc3\") " pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.670887 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f7aa95c-3861-48ab-a30f-0301aad169d7" path="/var/lib/kubelet/pods/0f7aa95c-3861-48ab-a30f-0301aad169d7/volumes" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.671483 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51968f99-bd7d-4958-bb6f-ba8035b2e637" path="/var/lib/kubelet/pods/51968f99-bd7d-4958-bb6f-ba8035b2e637/volumes" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.672044 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01fd158-f4e2-4ec8-953b-12dae9c49dd7" path="/var/lib/kubelet/pods/b01fd158-f4e2-4ec8-953b-12dae9c49dd7/volumes" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.673261 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e6ecfa-3855-4fee-890a-2a88f84dc8a4" path="/var/lib/kubelet/pods/c8e6ecfa-3855-4fee-890a-2a88f84dc8a4/volumes" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.673825 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c985f150-ec7d-4175-99a1-8fb775b7d7d9" path="/var/lib/kubelet/pods/c985f150-ec7d-4175-99a1-8fb775b7d7d9/volumes" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.694672 4925 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23cca3fd-3790-4add-a724-50721c42fe9d-catalog-content\") pod \"certified-operators-2f6j4\" (UID: \"23cca3fd-3790-4add-a724-50721c42fe9d\") " pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.694724 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94rjm\" (UniqueName: \"kubernetes.io/projected/23cca3fd-3790-4add-a724-50721c42fe9d-kube-api-access-94rjm\") pod \"certified-operators-2f6j4\" (UID: \"23cca3fd-3790-4add-a724-50721c42fe9d\") " pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.694755 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23cca3fd-3790-4add-a724-50721c42fe9d-utilities\") pod \"certified-operators-2f6j4\" (UID: \"23cca3fd-3790-4add-a724-50721c42fe9d\") " pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.695256 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23cca3fd-3790-4add-a724-50721c42fe9d-catalog-content\") pod \"certified-operators-2f6j4\" (UID: \"23cca3fd-3790-4add-a724-50721c42fe9d\") " pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.695280 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23cca3fd-3790-4add-a724-50721c42fe9d-utilities\") pod \"certified-operators-2f6j4\" (UID: \"23cca3fd-3790-4add-a724-50721c42fe9d\") " pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.712011 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94rjm\" 
(UniqueName: \"kubernetes.io/projected/23cca3fd-3790-4add-a724-50721c42fe9d-kube-api-access-94rjm\") pod \"certified-operators-2f6j4\" (UID: \"23cca3fd-3790-4add-a724-50721c42fe9d\") " pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.740108 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:20 crc kubenswrapper[4925]: I0202 11:03:20.901952 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:21 crc kubenswrapper[4925]: I0202 11:03:21.119946 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5w4mz"] Feb 02 11:03:21 crc kubenswrapper[4925]: W0202 11:03:21.124827 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25f59ff6_4459_41ea_ab79_373c701ffcc3.slice/crio-8a18fa4f9691f908a817768466865e16c1a1eea77615a0be3f6eb305578b3cae WatchSource:0}: Error finding container 8a18fa4f9691f908a817768466865e16c1a1eea77615a0be3f6eb305578b3cae: Status 404 returned error can't find the container with id 8a18fa4f9691f908a817768466865e16c1a1eea77615a0be3f6eb305578b3cae Feb 02 11:03:21 crc kubenswrapper[4925]: I0202 11:03:21.264204 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2f6j4"] Feb 02 11:03:21 crc kubenswrapper[4925]: W0202 11:03:21.270671 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23cca3fd_3790_4add_a724_50721c42fe9d.slice/crio-d4c36fbff55874ceb238b2aee607d04e65c3324fc6c77dd1eeb37c8209029942 WatchSource:0}: Error finding container d4c36fbff55874ceb238b2aee607d04e65c3324fc6c77dd1eeb37c8209029942: Status 404 returned error can't find the container with id 
d4c36fbff55874ceb238b2aee607d04e65c3324fc6c77dd1eeb37c8209029942 Feb 02 11:03:21 crc kubenswrapper[4925]: I0202 11:03:21.369886 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f6j4" event={"ID":"23cca3fd-3790-4add-a724-50721c42fe9d","Type":"ContainerStarted","Data":"d4c36fbff55874ceb238b2aee607d04e65c3324fc6c77dd1eeb37c8209029942"} Feb 02 11:03:21 crc kubenswrapper[4925]: I0202 11:03:21.370682 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w4mz" event={"ID":"25f59ff6-4459-41ea-ab79-373c701ffcc3","Type":"ContainerStarted","Data":"8a18fa4f9691f908a817768466865e16c1a1eea77615a0be3f6eb305578b3cae"} Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.378587 4925 generic.go:334] "Generic (PLEG): container finished" podID="25f59ff6-4459-41ea-ab79-373c701ffcc3" containerID="f13e6f9e3a23b8255de8891cb3c4a29d2825a41f12f69ed4d9ce3cae4380c8f1" exitCode=0 Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.379350 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w4mz" event={"ID":"25f59ff6-4459-41ea-ab79-373c701ffcc3","Type":"ContainerDied","Data":"f13e6f9e3a23b8255de8891cb3c4a29d2825a41f12f69ed4d9ce3cae4380c8f1"} Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.382139 4925 generic.go:334] "Generic (PLEG): container finished" podID="23cca3fd-3790-4add-a724-50721c42fe9d" containerID="ccc7e0ea58fbd622b34e5764bde5539478fa8d47bef3a99ca754937c709e960f" exitCode=0 Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.382175 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f6j4" event={"ID":"23cca3fd-3790-4add-a724-50721c42fe9d","Type":"ContainerDied","Data":"ccc7e0ea58fbd622b34e5764bde5539478fa8d47bef3a99ca754937c709e960f"} Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.783456 4925 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-z4qt5"] Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.785296 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.788124 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.790822 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4qt5"] Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.829987 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5928e7ce-0012-48b1-9187-d35097e13692-utilities\") pod \"redhat-operators-z4qt5\" (UID: \"5928e7ce-0012-48b1-9187-d35097e13692\") " pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.830105 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k54l\" (UniqueName: \"kubernetes.io/projected/5928e7ce-0012-48b1-9187-d35097e13692-kube-api-access-8k54l\") pod \"redhat-operators-z4qt5\" (UID: \"5928e7ce-0012-48b1-9187-d35097e13692\") " pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.830154 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5928e7ce-0012-48b1-9187-d35097e13692-catalog-content\") pod \"redhat-operators-z4qt5\" (UID: \"5928e7ce-0012-48b1-9187-d35097e13692\") " pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.931142 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8k54l\" (UniqueName: \"kubernetes.io/projected/5928e7ce-0012-48b1-9187-d35097e13692-kube-api-access-8k54l\") pod \"redhat-operators-z4qt5\" (UID: \"5928e7ce-0012-48b1-9187-d35097e13692\") " pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.931215 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5928e7ce-0012-48b1-9187-d35097e13692-catalog-content\") pod \"redhat-operators-z4qt5\" (UID: \"5928e7ce-0012-48b1-9187-d35097e13692\") " pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.931266 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5928e7ce-0012-48b1-9187-d35097e13692-utilities\") pod \"redhat-operators-z4qt5\" (UID: \"5928e7ce-0012-48b1-9187-d35097e13692\") " pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.931735 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5928e7ce-0012-48b1-9187-d35097e13692-utilities\") pod \"redhat-operators-z4qt5\" (UID: \"5928e7ce-0012-48b1-9187-d35097e13692\") " pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.931826 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5928e7ce-0012-48b1-9187-d35097e13692-catalog-content\") pod \"redhat-operators-z4qt5\" (UID: \"5928e7ce-0012-48b1-9187-d35097e13692\") " pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.958921 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k54l\" (UniqueName: 
\"kubernetes.io/projected/5928e7ce-0012-48b1-9187-d35097e13692-kube-api-access-8k54l\") pod \"redhat-operators-z4qt5\" (UID: \"5928e7ce-0012-48b1-9187-d35097e13692\") " pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.982541 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x6r7t"] Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.984148 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.986179 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 11:03:22 crc kubenswrapper[4925]: I0202 11:03:22.999181 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6r7t"] Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.032379 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7094aa75-75ce-4d8b-b2be-dd34f846d5fe-utilities\") pod \"community-operators-x6r7t\" (UID: \"7094aa75-75ce-4d8b-b2be-dd34f846d5fe\") " pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.032430 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrbf\" (UniqueName: \"kubernetes.io/projected/7094aa75-75ce-4d8b-b2be-dd34f846d5fe-kube-api-access-dsrbf\") pod \"community-operators-x6r7t\" (UID: \"7094aa75-75ce-4d8b-b2be-dd34f846d5fe\") " pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.032459 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7094aa75-75ce-4d8b-b2be-dd34f846d5fe-catalog-content\") pod \"community-operators-x6r7t\" (UID: \"7094aa75-75ce-4d8b-b2be-dd34f846d5fe\") " pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.117417 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.135659 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7094aa75-75ce-4d8b-b2be-dd34f846d5fe-utilities\") pod \"community-operators-x6r7t\" (UID: \"7094aa75-75ce-4d8b-b2be-dd34f846d5fe\") " pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.135698 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsrbf\" (UniqueName: \"kubernetes.io/projected/7094aa75-75ce-4d8b-b2be-dd34f846d5fe-kube-api-access-dsrbf\") pod \"community-operators-x6r7t\" (UID: \"7094aa75-75ce-4d8b-b2be-dd34f846d5fe\") " pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.135725 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7094aa75-75ce-4d8b-b2be-dd34f846d5fe-catalog-content\") pod \"community-operators-x6r7t\" (UID: \"7094aa75-75ce-4d8b-b2be-dd34f846d5fe\") " pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.136246 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7094aa75-75ce-4d8b-b2be-dd34f846d5fe-catalog-content\") pod \"community-operators-x6r7t\" (UID: \"7094aa75-75ce-4d8b-b2be-dd34f846d5fe\") " pod="openshift-marketplace/community-operators-x6r7t" Feb 02 
11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.136490 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7094aa75-75ce-4d8b-b2be-dd34f846d5fe-utilities\") pod \"community-operators-x6r7t\" (UID: \"7094aa75-75ce-4d8b-b2be-dd34f846d5fe\") " pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.153200 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsrbf\" (UniqueName: \"kubernetes.io/projected/7094aa75-75ce-4d8b-b2be-dd34f846d5fe-kube-api-access-dsrbf\") pod \"community-operators-x6r7t\" (UID: \"7094aa75-75ce-4d8b-b2be-dd34f846d5fe\") " pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.313672 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.354013 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4qt5"] Feb 02 11:03:23 crc kubenswrapper[4925]: W0202 11:03:23.374401 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5928e7ce_0012_48b1_9187_d35097e13692.slice/crio-49a1cf452e64ba7c90ab3f353dee08eb4dd53861b98197cafb2be9476a434772 WatchSource:0}: Error finding container 49a1cf452e64ba7c90ab3f353dee08eb4dd53861b98197cafb2be9476a434772: Status 404 returned error can't find the container with id 49a1cf452e64ba7c90ab3f353dee08eb4dd53861b98197cafb2be9476a434772 Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.395305 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f6j4" event={"ID":"23cca3fd-3790-4add-a724-50721c42fe9d","Type":"ContainerStarted","Data":"559a18c383751e03d5f67b9df18f31e85e93c282813a3247df2bf292be780427"} 
Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.397619 4925 generic.go:334] "Generic (PLEG): container finished" podID="25f59ff6-4459-41ea-ab79-373c701ffcc3" containerID="1ec51445bdfca0500344207ea467bb8ef39cfdd6bf6b723ef2e67684291f1699" exitCode=0 Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.397697 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w4mz" event={"ID":"25f59ff6-4459-41ea-ab79-373c701ffcc3","Type":"ContainerDied","Data":"1ec51445bdfca0500344207ea467bb8ef39cfdd6bf6b723ef2e67684291f1699"} Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.403066 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4qt5" event={"ID":"5928e7ce-0012-48b1-9187-d35097e13692","Type":"ContainerStarted","Data":"49a1cf452e64ba7c90ab3f353dee08eb4dd53861b98197cafb2be9476a434772"} Feb 02 11:03:23 crc kubenswrapper[4925]: I0202 11:03:23.734391 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6r7t"] Feb 02 11:03:23 crc kubenswrapper[4925]: W0202 11:03:23.740542 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7094aa75_75ce_4d8b_b2be_dd34f846d5fe.slice/crio-7243148ce3e1b2bbd62b1a2d46a02cbc63d1d5ab2d0bd7f7c2194eb7f2687508 WatchSource:0}: Error finding container 7243148ce3e1b2bbd62b1a2d46a02cbc63d1d5ab2d0bd7f7c2194eb7f2687508: Status 404 returned error can't find the container with id 7243148ce3e1b2bbd62b1a2d46a02cbc63d1d5ab2d0bd7f7c2194eb7f2687508 Feb 02 11:03:24 crc kubenswrapper[4925]: I0202 11:03:24.409504 4925 generic.go:334] "Generic (PLEG): container finished" podID="23cca3fd-3790-4add-a724-50721c42fe9d" containerID="559a18c383751e03d5f67b9df18f31e85e93c282813a3247df2bf292be780427" exitCode=0 Feb 02 11:03:24 crc kubenswrapper[4925]: I0202 11:03:24.409592 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-2f6j4" event={"ID":"23cca3fd-3790-4add-a724-50721c42fe9d","Type":"ContainerDied","Data":"559a18c383751e03d5f67b9df18f31e85e93c282813a3247df2bf292be780427"} Feb 02 11:03:24 crc kubenswrapper[4925]: I0202 11:03:24.412670 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w4mz" event={"ID":"25f59ff6-4459-41ea-ab79-373c701ffcc3","Type":"ContainerStarted","Data":"b84e73afba762e085dd90eba88b7532944086f50799072604c2908c75ea7066e"} Feb 02 11:03:24 crc kubenswrapper[4925]: I0202 11:03:24.414049 4925 generic.go:334] "Generic (PLEG): container finished" podID="7094aa75-75ce-4d8b-b2be-dd34f846d5fe" containerID="c9d815cd17858c36d5c7b65f902e87e46a7a293ce79053347ea5dc0069737892" exitCode=0 Feb 02 11:03:24 crc kubenswrapper[4925]: I0202 11:03:24.414112 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6r7t" event={"ID":"7094aa75-75ce-4d8b-b2be-dd34f846d5fe","Type":"ContainerDied","Data":"c9d815cd17858c36d5c7b65f902e87e46a7a293ce79053347ea5dc0069737892"} Feb 02 11:03:24 crc kubenswrapper[4925]: I0202 11:03:24.414133 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6r7t" event={"ID":"7094aa75-75ce-4d8b-b2be-dd34f846d5fe","Type":"ContainerStarted","Data":"7243148ce3e1b2bbd62b1a2d46a02cbc63d1d5ab2d0bd7f7c2194eb7f2687508"} Feb 02 11:03:24 crc kubenswrapper[4925]: I0202 11:03:24.416586 4925 generic.go:334] "Generic (PLEG): container finished" podID="5928e7ce-0012-48b1-9187-d35097e13692" containerID="6a46483388750080f50d28c9ea5298a2f434db9083b54b6fd8a50240637a750b" exitCode=0 Feb 02 11:03:24 crc kubenswrapper[4925]: I0202 11:03:24.416620 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4qt5" 
event={"ID":"5928e7ce-0012-48b1-9187-d35097e13692","Type":"ContainerDied","Data":"6a46483388750080f50d28c9ea5298a2f434db9083b54b6fd8a50240637a750b"} Feb 02 11:03:24 crc kubenswrapper[4925]: I0202 11:03:24.445721 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5w4mz" podStartSLOduration=2.8231162960000002 podStartE2EDuration="4.44570287s" podCreationTimestamp="2026-02-02 11:03:20 +0000 UTC" firstStartedPulling="2026-02-02 11:03:22.3806954 +0000 UTC m=+379.384944362" lastFinishedPulling="2026-02-02 11:03:24.003281974 +0000 UTC m=+381.007530936" observedRunningTime="2026-02-02 11:03:24.443738906 +0000 UTC m=+381.447987918" watchObservedRunningTime="2026-02-02 11:03:24.44570287 +0000 UTC m=+381.449951842" Feb 02 11:03:25 crc kubenswrapper[4925]: I0202 11:03:25.424386 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4qt5" event={"ID":"5928e7ce-0012-48b1-9187-d35097e13692","Type":"ContainerStarted","Data":"379300a7a90052182fdf68a5e206deb96fc27a15f91b46ee5b0d066e71d1679a"} Feb 02 11:03:25 crc kubenswrapper[4925]: I0202 11:03:25.426658 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f6j4" event={"ID":"23cca3fd-3790-4add-a724-50721c42fe9d","Type":"ContainerStarted","Data":"c6292839bd1f16ed1d823841488ab3caf75c8fab9ba647b1b4ba0ed813406f33"} Feb 02 11:03:25 crc kubenswrapper[4925]: I0202 11:03:25.428736 4925 generic.go:334] "Generic (PLEG): container finished" podID="7094aa75-75ce-4d8b-b2be-dd34f846d5fe" containerID="1ac5a48653bc847b2d79ef523eaeef5f662eedb50615f35caa2a6d8f48f7e1d2" exitCode=0 Feb 02 11:03:25 crc kubenswrapper[4925]: I0202 11:03:25.430424 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6r7t" 
event={"ID":"7094aa75-75ce-4d8b-b2be-dd34f846d5fe","Type":"ContainerDied","Data":"1ac5a48653bc847b2d79ef523eaeef5f662eedb50615f35caa2a6d8f48f7e1d2"} Feb 02 11:03:25 crc kubenswrapper[4925]: I0202 11:03:25.494743 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2f6j4" podStartSLOduration=3.002784948 podStartE2EDuration="5.494727105s" podCreationTimestamp="2026-02-02 11:03:20 +0000 UTC" firstStartedPulling="2026-02-02 11:03:22.3846484 +0000 UTC m=+379.388897362" lastFinishedPulling="2026-02-02 11:03:24.876590557 +0000 UTC m=+381.880839519" observedRunningTime="2026-02-02 11:03:25.489260563 +0000 UTC m=+382.493509545" watchObservedRunningTime="2026-02-02 11:03:25.494727105 +0000 UTC m=+382.498976067" Feb 02 11:03:26 crc kubenswrapper[4925]: I0202 11:03:26.437614 4925 generic.go:334] "Generic (PLEG): container finished" podID="5928e7ce-0012-48b1-9187-d35097e13692" containerID="379300a7a90052182fdf68a5e206deb96fc27a15f91b46ee5b0d066e71d1679a" exitCode=0 Feb 02 11:03:26 crc kubenswrapper[4925]: I0202 11:03:26.437755 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4qt5" event={"ID":"5928e7ce-0012-48b1-9187-d35097e13692","Type":"ContainerDied","Data":"379300a7a90052182fdf68a5e206deb96fc27a15f91b46ee5b0d066e71d1679a"} Feb 02 11:03:27 crc kubenswrapper[4925]: I0202 11:03:27.444795 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6r7t" event={"ID":"7094aa75-75ce-4d8b-b2be-dd34f846d5fe","Type":"ContainerStarted","Data":"f86b3aba5594ce5efe01ac3bdf4d3f77d138be5ed9985103d2c0aae6461c319d"} Feb 02 11:03:27 crc kubenswrapper[4925]: I0202 11:03:27.447246 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4qt5" event={"ID":"5928e7ce-0012-48b1-9187-d35097e13692","Type":"ContainerStarted","Data":"7327a1a30b7e7019c81d47199f0407731adcebacc694245486cd92beb6d6283a"} 
Feb 02 11:03:27 crc kubenswrapper[4925]: I0202 11:03:27.463357 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x6r7t" podStartSLOduration=2.866519128 podStartE2EDuration="5.463337749s" podCreationTimestamp="2026-02-02 11:03:22 +0000 UTC" firstStartedPulling="2026-02-02 11:03:24.415322217 +0000 UTC m=+381.419571219" lastFinishedPulling="2026-02-02 11:03:27.012140878 +0000 UTC m=+384.016389840" observedRunningTime="2026-02-02 11:03:27.461094547 +0000 UTC m=+384.465343509" watchObservedRunningTime="2026-02-02 11:03:27.463337749 +0000 UTC m=+384.467586711" Feb 02 11:03:27 crc kubenswrapper[4925]: I0202 11:03:27.478278 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z4qt5" podStartSLOduration=3.010763634 podStartE2EDuration="5.478251753s" podCreationTimestamp="2026-02-02 11:03:22 +0000 UTC" firstStartedPulling="2026-02-02 11:03:24.41903854 +0000 UTC m=+381.423287552" lastFinishedPulling="2026-02-02 11:03:26.886526709 +0000 UTC m=+383.890775671" observedRunningTime="2026-02-02 11:03:27.476248328 +0000 UTC m=+384.480497300" watchObservedRunningTime="2026-02-02 11:03:27.478251753 +0000 UTC m=+384.482500715" Feb 02 11:03:30 crc kubenswrapper[4925]: I0202 11:03:30.741040 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:30 crc kubenswrapper[4925]: I0202 11:03:30.741401 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:30 crc kubenswrapper[4925]: I0202 11:03:30.795351 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:30 crc kubenswrapper[4925]: I0202 11:03:30.903044 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:30 crc kubenswrapper[4925]: I0202 11:03:30.903140 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:30 crc kubenswrapper[4925]: I0202 11:03:30.946375 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.237326 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" podUID="421043e2-e94a-4b1b-8571-ea62b753b06d" containerName="registry" containerID="cri-o://b70bba44e5ebf00449a566da6858fccaab78682a3d12b02f822e6b90a9f107a8" gracePeriod=30 Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.472158 4925 generic.go:334] "Generic (PLEG): container finished" podID="421043e2-e94a-4b1b-8571-ea62b753b06d" containerID="b70bba44e5ebf00449a566da6858fccaab78682a3d12b02f822e6b90a9f107a8" exitCode=0 Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.472270 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" event={"ID":"421043e2-e94a-4b1b-8571-ea62b753b06d","Type":"ContainerDied","Data":"b70bba44e5ebf00449a566da6858fccaab78682a3d12b02f822e6b90a9f107a8"} Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.505760 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2f6j4" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.519232 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5w4mz" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.692236 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.748281 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-trusted-ca\") pod \"421043e2-e94a-4b1b-8571-ea62b753b06d\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.748617 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-bound-sa-token\") pod \"421043e2-e94a-4b1b-8571-ea62b753b06d\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.748680 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/421043e2-e94a-4b1b-8571-ea62b753b06d-ca-trust-extracted\") pod \"421043e2-e94a-4b1b-8571-ea62b753b06d\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.748893 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"421043e2-e94a-4b1b-8571-ea62b753b06d\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.748955 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-tls\") pod \"421043e2-e94a-4b1b-8571-ea62b753b06d\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.749049 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-certificates\") pod \"421043e2-e94a-4b1b-8571-ea62b753b06d\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.749126 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/421043e2-e94a-4b1b-8571-ea62b753b06d-installation-pull-secrets\") pod \"421043e2-e94a-4b1b-8571-ea62b753b06d\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.749147 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ss5z\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-kube-api-access-2ss5z\") pod \"421043e2-e94a-4b1b-8571-ea62b753b06d\" (UID: \"421043e2-e94a-4b1b-8571-ea62b753b06d\") " Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.749411 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "421043e2-e94a-4b1b-8571-ea62b753b06d" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.751682 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "421043e2-e94a-4b1b-8571-ea62b753b06d" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.754016 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "421043e2-e94a-4b1b-8571-ea62b753b06d" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.754443 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-kube-api-access-2ss5z" (OuterVolumeSpecName: "kube-api-access-2ss5z") pod "421043e2-e94a-4b1b-8571-ea62b753b06d" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d"). InnerVolumeSpecName "kube-api-access-2ss5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.756509 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421043e2-e94a-4b1b-8571-ea62b753b06d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "421043e2-e94a-4b1b-8571-ea62b753b06d" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.764581 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "421043e2-e94a-4b1b-8571-ea62b753b06d" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.766980 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421043e2-e94a-4b1b-8571-ea62b753b06d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "421043e2-e94a-4b1b-8571-ea62b753b06d" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.821184 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "421043e2-e94a-4b1b-8571-ea62b753b06d" (UID: "421043e2-e94a-4b1b-8571-ea62b753b06d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.850951 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.850997 4925 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.851012 4925 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/421043e2-e94a-4b1b-8571-ea62b753b06d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.851024 4925 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.851036 4925 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/421043e2-e94a-4b1b-8571-ea62b753b06d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.851049 4925 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/421043e2-e94a-4b1b-8571-ea62b753b06d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:31 crc kubenswrapper[4925]: I0202 11:03:31.851061 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ss5z\" (UniqueName: \"kubernetes.io/projected/421043e2-e94a-4b1b-8571-ea62b753b06d-kube-api-access-2ss5z\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:32 crc kubenswrapper[4925]: I0202 11:03:32.480467 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" event={"ID":"421043e2-e94a-4b1b-8571-ea62b753b06d","Type":"ContainerDied","Data":"275d46f12e397b980eb8e70892f4952d509d7a109a3c4133a784aa560b45e8c3"} Feb 02 11:03:32 crc kubenswrapper[4925]: I0202 11:03:32.480526 4925 scope.go:117] "RemoveContainer" containerID="b70bba44e5ebf00449a566da6858fccaab78682a3d12b02f822e6b90a9f107a8" Feb 02 11:03:32 crc kubenswrapper[4925]: I0202 11:03:32.480819 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-56md8" Feb 02 11:03:32 crc kubenswrapper[4925]: I0202 11:03:32.536023 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-56md8"] Feb 02 11:03:32 crc kubenswrapper[4925]: I0202 11:03:32.545101 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-56md8"] Feb 02 11:03:32 crc kubenswrapper[4925]: I0202 11:03:32.671058 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="421043e2-e94a-4b1b-8571-ea62b753b06d" path="/var/lib/kubelet/pods/421043e2-e94a-4b1b-8571-ea62b753b06d/volumes" Feb 02 11:03:33 crc kubenswrapper[4925]: I0202 11:03:33.118792 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:33 crc kubenswrapper[4925]: I0202 11:03:33.118848 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:33 crc kubenswrapper[4925]: I0202 11:03:33.169764 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:33 crc kubenswrapper[4925]: I0202 11:03:33.314453 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:33 crc kubenswrapper[4925]: I0202 11:03:33.314507 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:33 crc kubenswrapper[4925]: I0202 11:03:33.364855 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:33 crc kubenswrapper[4925]: I0202 11:03:33.522731 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-z4qt5" Feb 02 11:03:33 crc kubenswrapper[4925]: I0202 11:03:33.523800 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x6r7t" Feb 02 11:03:43 crc kubenswrapper[4925]: I0202 11:03:43.398973 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:03:43 crc kubenswrapper[4925]: I0202 11:03:43.399600 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:04:13 crc kubenswrapper[4925]: I0202 11:04:13.398700 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:04:13 crc kubenswrapper[4925]: I0202 11:04:13.399176 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:04:13 crc kubenswrapper[4925]: I0202 11:04:13.399224 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:04:13 crc kubenswrapper[4925]: I0202 
11:04:13.399717 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44bb0f6e97b3094fc1dd166bb55ea42a68acba564a9b407084dca96d96dbdd51"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:04:13 crc kubenswrapper[4925]: I0202 11:04:13.399773 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://44bb0f6e97b3094fc1dd166bb55ea42a68acba564a9b407084dca96d96dbdd51" gracePeriod=600 Feb 02 11:04:13 crc kubenswrapper[4925]: I0202 11:04:13.739359 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="44bb0f6e97b3094fc1dd166bb55ea42a68acba564a9b407084dca96d96dbdd51" exitCode=0 Feb 02 11:04:13 crc kubenswrapper[4925]: I0202 11:04:13.739450 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"44bb0f6e97b3094fc1dd166bb55ea42a68acba564a9b407084dca96d96dbdd51"} Feb 02 11:04:13 crc kubenswrapper[4925]: I0202 11:04:13.739710 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"34ce5b38806e94c52ae2e1827e7acb76781694c5c09b9303334780fe7804194c"} Feb 02 11:04:13 crc kubenswrapper[4925]: I0202 11:04:13.739728 4925 scope.go:117] "RemoveContainer" containerID="770611b03ba9a94ea3ea12af63083be9260a561402868a717e44a5158854ab48" Feb 02 11:06:13 crc kubenswrapper[4925]: I0202 11:06:13.399014 4925 patch_prober.go:28] interesting 
pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:06:13 crc kubenswrapper[4925]: I0202 11:06:13.399729 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:06:43 crc kubenswrapper[4925]: I0202 11:06:43.407414 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:06:43 crc kubenswrapper[4925]: I0202 11:06:43.408211 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:07:13 crc kubenswrapper[4925]: I0202 11:07:13.399307 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:07:13 crc kubenswrapper[4925]: I0202 11:07:13.400037 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:07:13 crc kubenswrapper[4925]: I0202 11:07:13.400159 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:07:13 crc kubenswrapper[4925]: I0202 11:07:13.401001 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34ce5b38806e94c52ae2e1827e7acb76781694c5c09b9303334780fe7804194c"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:07:13 crc kubenswrapper[4925]: I0202 11:07:13.401127 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://34ce5b38806e94c52ae2e1827e7acb76781694c5c09b9303334780fe7804194c" gracePeriod=600 Feb 02 11:07:13 crc kubenswrapper[4925]: I0202 11:07:13.797347 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="34ce5b38806e94c52ae2e1827e7acb76781694c5c09b9303334780fe7804194c" exitCode=0 Feb 02 11:07:13 crc kubenswrapper[4925]: I0202 11:07:13.797452 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"34ce5b38806e94c52ae2e1827e7acb76781694c5c09b9303334780fe7804194c"} Feb 02 11:07:13 crc kubenswrapper[4925]: I0202 11:07:13.797702 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"ffca907841f0a5bec449b7e08e60cef6f7cea31a8df22b28332865ae60f507bc"} Feb 02 11:07:13 crc kubenswrapper[4925]: I0202 11:07:13.797730 4925 scope.go:117] "RemoveContainer" containerID="44bb0f6e97b3094fc1dd166bb55ea42a68acba564a9b407084dca96d96dbdd51" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.726841 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm"] Feb 02 11:08:23 crc kubenswrapper[4925]: E0202 11:08:23.727789 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421043e2-e94a-4b1b-8571-ea62b753b06d" containerName="registry" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.727806 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="421043e2-e94a-4b1b-8571-ea62b753b06d" containerName="registry" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.727939 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="421043e2-e94a-4b1b-8571-ea62b753b06d" containerName="registry" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.728411 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.731133 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.733043 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.733195 4925 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5m8kz" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.735144 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2kqvb"] Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.735875 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2kqvb" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.740626 4925 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-f56xf" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.745330 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm"] Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.755138 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9bcc7"] Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.756220 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.759976 4925 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-56sff" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.764951 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2kqvb"] Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.769525 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9bcc7"] Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.846676 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dp59\" (UniqueName: \"kubernetes.io/projected/06d7c0c7-2b68-478e-8113-abae661d30f6-kube-api-access-6dp59\") pod \"cert-manager-858654f9db-2kqvb\" (UID: \"06d7c0c7-2b68-478e-8113-abae661d30f6\") " pod="cert-manager/cert-manager-858654f9db-2kqvb" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.846739 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzpn4\" (UniqueName: \"kubernetes.io/projected/8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd-kube-api-access-hzpn4\") pod \"cert-manager-cainjector-cf98fcc89-2l2bm\" (UID: \"8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.947790 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88dxj\" (UniqueName: \"kubernetes.io/projected/b0cdbe98-e1d1-4844-a567-695916cc41f0-kube-api-access-88dxj\") pod \"cert-manager-webhook-687f57d79b-9bcc7\" (UID: \"b0cdbe98-e1d1-4844-a567-695916cc41f0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.947879 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dp59\" (UniqueName: \"kubernetes.io/projected/06d7c0c7-2b68-478e-8113-abae661d30f6-kube-api-access-6dp59\") pod \"cert-manager-858654f9db-2kqvb\" (UID: \"06d7c0c7-2b68-478e-8113-abae661d30f6\") " pod="cert-manager/cert-manager-858654f9db-2kqvb" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.947916 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzpn4\" (UniqueName: \"kubernetes.io/projected/8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd-kube-api-access-hzpn4\") pod \"cert-manager-cainjector-cf98fcc89-2l2bm\" (UID: \"8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.968648 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzpn4\" (UniqueName: \"kubernetes.io/projected/8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd-kube-api-access-hzpn4\") pod \"cert-manager-cainjector-cf98fcc89-2l2bm\" (UID: \"8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm" Feb 02 11:08:23 crc kubenswrapper[4925]: I0202 11:08:23.982090 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dp59\" (UniqueName: \"kubernetes.io/projected/06d7c0c7-2b68-478e-8113-abae661d30f6-kube-api-access-6dp59\") pod \"cert-manager-858654f9db-2kqvb\" (UID: \"06d7c0c7-2b68-478e-8113-abae661d30f6\") " pod="cert-manager/cert-manager-858654f9db-2kqvb" Feb 02 11:08:24 crc kubenswrapper[4925]: I0202 11:08:24.049095 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm" Feb 02 11:08:24 crc kubenswrapper[4925]: I0202 11:08:24.049529 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88dxj\" (UniqueName: \"kubernetes.io/projected/b0cdbe98-e1d1-4844-a567-695916cc41f0-kube-api-access-88dxj\") pod \"cert-manager-webhook-687f57d79b-9bcc7\" (UID: \"b0cdbe98-e1d1-4844-a567-695916cc41f0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" Feb 02 11:08:24 crc kubenswrapper[4925]: I0202 11:08:24.061390 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2kqvb" Feb 02 11:08:24 crc kubenswrapper[4925]: I0202 11:08:24.067922 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88dxj\" (UniqueName: \"kubernetes.io/projected/b0cdbe98-e1d1-4844-a567-695916cc41f0-kube-api-access-88dxj\") pod \"cert-manager-webhook-687f57d79b-9bcc7\" (UID: \"b0cdbe98-e1d1-4844-a567-695916cc41f0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" Feb 02 11:08:24 crc kubenswrapper[4925]: I0202 11:08:24.074814 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" Feb 02 11:08:24 crc kubenswrapper[4925]: I0202 11:08:24.258336 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm"] Feb 02 11:08:24 crc kubenswrapper[4925]: I0202 11:08:24.272425 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:08:24 crc kubenswrapper[4925]: I0202 11:08:24.356396 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9bcc7"] Feb 02 11:08:24 crc kubenswrapper[4925]: W0202 11:08:24.362228 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0cdbe98_e1d1_4844_a567_695916cc41f0.slice/crio-9802fcfc9a63a1a2115a40e4bcd2adddfddf27a5c909c66b75d1c1b8dd0852c7 WatchSource:0}: Error finding container 9802fcfc9a63a1a2115a40e4bcd2adddfddf27a5c909c66b75d1c1b8dd0852c7: Status 404 returned error can't find the container with id 9802fcfc9a63a1a2115a40e4bcd2adddfddf27a5c909c66b75d1c1b8dd0852c7 Feb 02 11:08:24 crc kubenswrapper[4925]: I0202 11:08:24.511631 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2kqvb"] Feb 02 11:08:24 crc kubenswrapper[4925]: W0202 11:08:24.514375 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06d7c0c7_2b68_478e_8113_abae661d30f6.slice/crio-899e92ede2a08ced821117786d024c9171f4b55f8ce4f590c14cea6ca1ddd5a8 WatchSource:0}: Error finding container 899e92ede2a08ced821117786d024c9171f4b55f8ce4f590c14cea6ca1ddd5a8: Status 404 returned error can't find the container with id 899e92ede2a08ced821117786d024c9171f4b55f8ce4f590c14cea6ca1ddd5a8 Feb 02 11:08:25 crc kubenswrapper[4925]: I0202 11:08:25.223372 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-858654f9db-2kqvb" event={"ID":"06d7c0c7-2b68-478e-8113-abae661d30f6","Type":"ContainerStarted","Data":"899e92ede2a08ced821117786d024c9171f4b55f8ce4f590c14cea6ca1ddd5a8"} Feb 02 11:08:25 crc kubenswrapper[4925]: I0202 11:08:25.224790 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm" event={"ID":"8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd","Type":"ContainerStarted","Data":"6b724a012252bdf81ed5ee9865c645e08a020a204f9ec3936827f894e9382915"} Feb 02 11:08:25 crc kubenswrapper[4925]: I0202 11:08:25.225966 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" event={"ID":"b0cdbe98-e1d1-4844-a567-695916cc41f0","Type":"ContainerStarted","Data":"9802fcfc9a63a1a2115a40e4bcd2adddfddf27a5c909c66b75d1c1b8dd0852c7"} Feb 02 11:08:28 crc kubenswrapper[4925]: I0202 11:08:28.240309 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm" event={"ID":"8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd","Type":"ContainerStarted","Data":"6e1ed84c0dfad6d10e89802c5236364e95683f72b690fafdb2230629c5daadab"} Feb 02 11:08:28 crc kubenswrapper[4925]: I0202 11:08:28.254984 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2l2bm" podStartSLOduration=2.333562889 podStartE2EDuration="5.25496628s" podCreationTimestamp="2026-02-02 11:08:23 +0000 UTC" firstStartedPulling="2026-02-02 11:08:24.272208211 +0000 UTC m=+681.276457163" lastFinishedPulling="2026-02-02 11:08:27.193611592 +0000 UTC m=+684.197860554" observedRunningTime="2026-02-02 11:08:28.251807335 +0000 UTC m=+685.256056307" watchObservedRunningTime="2026-02-02 11:08:28.25496628 +0000 UTC m=+685.259215242" Feb 02 11:08:30 crc kubenswrapper[4925]: I0202 11:08:30.255850 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2kqvb" 
event={"ID":"06d7c0c7-2b68-478e-8113-abae661d30f6","Type":"ContainerStarted","Data":"4fab802bc428283cea49ee5b8ffd0cc3bd20cc62c2e332fe254a15c0cbae43e7"} Feb 02 11:08:30 crc kubenswrapper[4925]: I0202 11:08:30.257611 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" event={"ID":"b0cdbe98-e1d1-4844-a567-695916cc41f0","Type":"ContainerStarted","Data":"e9a0ec9f1bc25223ebd90fabfd79378a268663155e6af8a9beef059acf329e96"} Feb 02 11:08:30 crc kubenswrapper[4925]: I0202 11:08:30.257778 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" Feb 02 11:08:30 crc kubenswrapper[4925]: I0202 11:08:30.273480 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2kqvb" podStartSLOduration=2.161565248 podStartE2EDuration="7.273456728s" podCreationTimestamp="2026-02-02 11:08:23 +0000 UTC" firstStartedPulling="2026-02-02 11:08:24.517088532 +0000 UTC m=+681.521337494" lastFinishedPulling="2026-02-02 11:08:29.628980012 +0000 UTC m=+686.633228974" observedRunningTime="2026-02-02 11:08:30.270053046 +0000 UTC m=+687.274302008" watchObservedRunningTime="2026-02-02 11:08:30.273456728 +0000 UTC m=+687.277705690" Feb 02 11:08:30 crc kubenswrapper[4925]: I0202 11:08:30.334227 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" podStartSLOduration=2.073830197 podStartE2EDuration="7.334206061s" podCreationTimestamp="2026-02-02 11:08:23 +0000 UTC" firstStartedPulling="2026-02-02 11:08:24.364116526 +0000 UTC m=+681.368365488" lastFinishedPulling="2026-02-02 11:08:29.62449238 +0000 UTC m=+686.628741352" observedRunningTime="2026-02-02 11:08:30.33268397 +0000 UTC m=+687.336932942" watchObservedRunningTime="2026-02-02 11:08:30.334206061 +0000 UTC m=+687.338455023" Feb 02 11:08:34 crc kubenswrapper[4925]: I0202 11:08:34.078544 4925 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-9bcc7" Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.294298 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rlpb"] Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.295748 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovn-controller" containerID="cri-o://e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8" gracePeriod=30 Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.296298 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="northd" containerID="cri-o://f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b" gracePeriod=30 Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.296355 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e" gracePeriod=30 Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.296449 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="nbdb" containerID="cri-o://40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9" gracePeriod=30 Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.296430 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" 
containerName="sbdb" containerID="cri-o://4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810" gracePeriod=30 Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.296487 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kube-rbac-proxy-node" containerID="cri-o://a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5" gracePeriod=30 Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.296594 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovn-acl-logging" containerID="cri-o://33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161" gracePeriod=30 Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.331743 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" containerID="cri-o://656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819" gracePeriod=30 Feb 02 11:08:57 crc kubenswrapper[4925]: I0202 11:08:57.998751 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/3.log" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.000931 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovn-acl-logging/0.log" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.001336 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovn-controller/0.log" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.001775 4925 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.020748 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a57c5d12-a4de-413c-a581-4b693550e8c3-ovn-node-metrics-cert\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.020804 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-var-lib-openvswitch\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.020840 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-ovn\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.020865 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-etc-openvswitch\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.020888 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-log-socket\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.020931 4925 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-slash\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.020954 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-systemd\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.020983 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr96t\" (UniqueName: \"kubernetes.io/projected/a57c5d12-a4de-413c-a581-4b693550e8c3-kube-api-access-tr96t\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021014 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-systemd-units\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021033 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-netd\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021062 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021125 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-kubelet\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021151 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-env-overrides\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021169 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-netns\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021195 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-config\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021321 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021227 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-script-lib\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021435 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-ovn-kubernetes\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021462 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-bin\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021481 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-openvswitch\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021514 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-node-log\") pod \"a57c5d12-a4de-413c-a581-4b693550e8c3\" (UID: \"a57c5d12-a4de-413c-a581-4b693550e8c3\") " Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021886 4925 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021922 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-node-log" (OuterVolumeSpecName: "node-log") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021943 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.021991 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022191 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022304 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022336 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022322 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022345 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022364 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022368 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022367 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022402 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-slash" (OuterVolumeSpecName: "host-slash") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022403 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022423 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-log-socket" (OuterVolumeSpecName: "log-socket") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022543 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.022673 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.029431 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a57c5d12-a4de-413c-a581-4b693550e8c3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.029876 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a57c5d12-a4de-413c-a581-4b693550e8c3-kube-api-access-tr96t" (OuterVolumeSpecName: "kube-api-access-tr96t") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "kube-api-access-tr96t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.041863 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a57c5d12-a4de-413c-a581-4b693550e8c3" (UID: "a57c5d12-a4de-413c-a581-4b693550e8c3"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.076571 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dzkbp"] Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.076811 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.076828 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.076844 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.076854 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.076868 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="sbdb" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.076888 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="sbdb" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.076898 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.076908 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.076922 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" 
containerName="nbdb" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.076930 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="nbdb" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.076945 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovn-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.076953 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovn-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.076963 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovn-acl-logging" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.076972 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovn-acl-logging" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.076982 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="northd" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.076990 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="northd" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.077004 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kubecfg-setup" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077013 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kubecfg-setup" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.077023 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kube-rbac-proxy-node" Feb 02 11:08:58 crc 
kubenswrapper[4925]: I0202 11:08:58.077032 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kube-rbac-proxy-node" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.077051 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077059 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077196 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kube-rbac-proxy-node" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077207 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="northd" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077218 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077227 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077238 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077249 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovn-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077261 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" 
containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077271 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="nbdb" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077282 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovn-acl-logging" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077292 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="sbdb" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077303 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.077416 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077425 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077543 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.077669 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.077678 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerName="ovnkube-controller" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.081567 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.123746 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-slash\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.123831 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-node-log\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.123879 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-run-ovn-kubernetes\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.123920 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-systemd-units\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.123950 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-kubelet\") pod \"ovnkube-node-dzkbp\" (UID: 
\"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.123982 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6714b54f-a429-45cf-9521-501e3476a431-ovn-node-metrics-cert\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124011 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-run-netns\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124042 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-cni-netd\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124189 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-etc-openvswitch\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124223 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6714b54f-a429-45cf-9521-501e3476a431-ovnkube-config\") pod 
\"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124257 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-log-socket\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124294 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6714b54f-a429-45cf-9521-501e3476a431-env-overrides\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124325 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-var-lib-openvswitch\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124456 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-run-ovn\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124567 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-cni-bin\") pod 
\"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124617 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxlv4\" (UniqueName: \"kubernetes.io/projected/6714b54f-a429-45cf-9521-501e3476a431-kube-api-access-pxlv4\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124634 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-run-openvswitch\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124652 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124672 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-run-systemd\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124768 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6714b54f-a429-45cf-9521-501e3476a431-ovnkube-script-lib\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124895 4925 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124939 4925 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124967 4925 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.124991 4925 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125013 4925 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125035 4925 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a57c5d12-a4de-413c-a581-4b693550e8c3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125054 4925 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125072 4925 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125118 4925 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125136 4925 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125153 4925 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125170 4925 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125186 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr96t\" (UniqueName: \"kubernetes.io/projected/a57c5d12-a4de-413c-a581-4b693550e8c3-kube-api-access-tr96t\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125203 4925 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 
11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125219 4925 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125236 4925 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125252 4925 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125268 4925 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a57c5d12-a4de-413c-a581-4b693550e8c3-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.125284 4925 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a57c5d12-a4de-413c-a581-4b693550e8c3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226262 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-run-ovn-kubernetes\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226305 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-systemd-units\") pod 
\"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226323 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-kubelet\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226338 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6714b54f-a429-45cf-9521-501e3476a431-ovn-node-metrics-cert\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226351 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-run-netns\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226364 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-cni-netd\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226382 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-etc-openvswitch\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226393 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-systemd-units\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226401 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6714b54f-a429-45cf-9521-501e3476a431-ovnkube-config\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226456 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-log-socket\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226469 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-run-ovn-kubernetes\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226456 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-kubelet\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 
11:08:58.226502 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-log-socket\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226510 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-cni-netd\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226481 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6714b54f-a429-45cf-9521-501e3476a431-env-overrides\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226724 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-var-lib-openvswitch\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226769 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-run-ovn\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226803 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-cni-bin\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226841 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-run-ovn\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226565 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-etc-openvswitch\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226857 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-cni-bin\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226482 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-run-netns\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226830 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxlv4\" (UniqueName: \"kubernetes.io/projected/6714b54f-a429-45cf-9521-501e3476a431-kube-api-access-pxlv4\") pod 
\"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226928 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-run-openvswitch\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226896 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-var-lib-openvswitch\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226986 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-run-openvswitch\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227008 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227039 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6714b54f-a429-45cf-9521-501e3476a431-env-overrides\") pod \"ovnkube-node-dzkbp\" (UID: 
\"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.226975 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227088 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-run-systemd\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227116 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-run-systemd\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227120 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6714b54f-a429-45cf-9521-501e3476a431-ovnkube-script-lib\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227169 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-slash\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227191 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6714b54f-a429-45cf-9521-501e3476a431-ovnkube-config\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227201 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-node-log\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227258 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-node-log\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227326 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6714b54f-a429-45cf-9521-501e3476a431-host-slash\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.227662 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6714b54f-a429-45cf-9521-501e3476a431-ovnkube-script-lib\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.232987 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6714b54f-a429-45cf-9521-501e3476a431-ovn-node-metrics-cert\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.251786 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxlv4\" (UniqueName: \"kubernetes.io/projected/6714b54f-a429-45cf-9521-501e3476a431-kube-api-access-pxlv4\") pod \"ovnkube-node-dzkbp\" (UID: \"6714b54f-a429-45cf-9521-501e3476a431\") " pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.396052 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.421957 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovnkube-controller/3.log" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.424537 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovn-acl-logging/0.log" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425044 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlpb_a57c5d12-a4de-413c-a581-4b693550e8c3/ovn-controller/0.log" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425455 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819" exitCode=0 Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425491 4925 generic.go:334] "Generic (PLEG): container finished" 
podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810" exitCode=0 Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425501 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9" exitCode=0 Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425513 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b" exitCode=0 Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425521 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e" exitCode=0 Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425529 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5" exitCode=0 Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425540 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161" exitCode=143 Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425548 4925 generic.go:334] "Generic (PLEG): container finished" podID="a57c5d12-a4de-413c-a581-4b693550e8c3" containerID="e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8" exitCode=143 Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425558 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425614 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425671 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425686 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425697 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425708 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425720 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} Feb 02 11:08:58 crc 
kubenswrapper[4925]: I0202 11:08:58.425735 4925 scope.go:117] "RemoveContainer" containerID="656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425752 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425766 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425773 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425780 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425786 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425793 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425800 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425806 4925 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425819 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425828 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425838 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425848 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425855 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425861 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425868 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} Feb 02 11:08:58 crc 
kubenswrapper[4925]: I0202 11:08:58.425874 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425880 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425886 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425892 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425897 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425906 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425916 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425923 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425932 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425938 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425944 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425949 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425955 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425961 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425966 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425972 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425980 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlpb" event={"ID":"a57c5d12-a4de-413c-a581-4b693550e8c3","Type":"ContainerDied","Data":"1b99cb00f8af15785503e47f7f140df80e76860f057c2cb3056d9138a36333bf"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425989 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.425996 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.426002 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.426008 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.426016 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.426023 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.426029 4925 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.426037 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.426043 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.426049 4925 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.427269 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/2.log" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.429582 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/1.log" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.429644 4925 generic.go:334] "Generic (PLEG): container finished" podID="b84c6881-f719-456f-9135-7dfb7688a48d" containerID="b84be1334f2ff06bf521e5ecdedb24f9d1ffe0fd8cd6bd23e7e3ee59feabaae5" exitCode=2 Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.429690 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4rr9" event={"ID":"b84c6881-f719-456f-9135-7dfb7688a48d","Type":"ContainerDied","Data":"b84be1334f2ff06bf521e5ecdedb24f9d1ffe0fd8cd6bd23e7e3ee59feabaae5"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.429717 4925 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032"} Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.430756 4925 scope.go:117] "RemoveContainer" containerID="b84be1334f2ff06bf521e5ecdedb24f9d1ffe0fd8cd6bd23e7e3ee59feabaae5" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.432232 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q4rr9_openshift-multus(b84c6881-f719-456f-9135-7dfb7688a48d)\"" pod="openshift-multus/multus-q4rr9" podUID="b84c6881-f719-456f-9135-7dfb7688a48d" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.475521 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.510629 4925 scope.go:117] "RemoveContainer" containerID="4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.515508 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rlpb"] Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.518498 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rlpb"] Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.551332 4925 scope.go:117] "RemoveContainer" containerID="40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.564522 4925 scope.go:117] "RemoveContainer" containerID="f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.576917 4925 scope.go:117] "RemoveContainer" 
containerID="502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.588302 4925 scope.go:117] "RemoveContainer" containerID="a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.600901 4925 scope.go:117] "RemoveContainer" containerID="33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.614251 4925 scope.go:117] "RemoveContainer" containerID="e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.626780 4925 scope.go:117] "RemoveContainer" containerID="1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.642927 4925 scope.go:117] "RemoveContainer" containerID="656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.643353 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": container with ID starting with 656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819 not found: ID does not exist" containerID="656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.643382 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} err="failed to get container status \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": rpc error: code = NotFound desc = could not find container \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": container with ID starting with 
656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.643401 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.643704 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\": container with ID starting with 9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d not found: ID does not exist" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.643728 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} err="failed to get container status \"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\": rpc error: code = NotFound desc = could not find container \"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\": container with ID starting with 9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.643743 4925 scope.go:117] "RemoveContainer" containerID="4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.644131 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\": container with ID starting with 4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810 not found: ID does not exist" containerID="4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810" Feb 02 11:08:58 crc 
kubenswrapper[4925]: I0202 11:08:58.644152 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} err="failed to get container status \"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\": rpc error: code = NotFound desc = could not find container \"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\": container with ID starting with 4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.644164 4925 scope.go:117] "RemoveContainer" containerID="40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.644382 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\": container with ID starting with 40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9 not found: ID does not exist" containerID="40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.644401 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} err="failed to get container status \"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\": rpc error: code = NotFound desc = could not find container \"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\": container with ID starting with 40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.644414 4925 scope.go:117] "RemoveContainer" containerID="f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b" Feb 02 
11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.644624 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\": container with ID starting with f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b not found: ID does not exist" containerID="f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.644653 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} err="failed to get container status \"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\": rpc error: code = NotFound desc = could not find container \"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\": container with ID starting with f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.644665 4925 scope.go:117] "RemoveContainer" containerID="502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.644843 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\": container with ID starting with 502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e not found: ID does not exist" containerID="502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.644860 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} err="failed to get container status 
\"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\": rpc error: code = NotFound desc = could not find container \"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\": container with ID starting with 502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.644871 4925 scope.go:117] "RemoveContainer" containerID="a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.645138 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\": container with ID starting with a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5 not found: ID does not exist" containerID="a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.645155 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} err="failed to get container status \"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\": rpc error: code = NotFound desc = could not find container \"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\": container with ID starting with a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.645167 4925 scope.go:117] "RemoveContainer" containerID="33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.645339 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\": container with ID starting with 33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161 not found: ID does not exist" containerID="33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.645356 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} err="failed to get container status \"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\": rpc error: code = NotFound desc = could not find container \"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\": container with ID starting with 33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.645366 4925 scope.go:117] "RemoveContainer" containerID="e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.645586 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\": container with ID starting with e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8 not found: ID does not exist" containerID="e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.645605 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} err="failed to get container status \"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\": rpc error: code = NotFound desc = could not find container \"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\": container with ID 
starting with e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.645618 4925 scope.go:117] "RemoveContainer" containerID="1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466" Feb 02 11:08:58 crc kubenswrapper[4925]: E0202 11:08:58.645910 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\": container with ID starting with 1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466 not found: ID does not exist" containerID="1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.645929 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466"} err="failed to get container status \"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\": rpc error: code = NotFound desc = could not find container \"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\": container with ID starting with 1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.645941 4925 scope.go:117] "RemoveContainer" containerID="656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.646131 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} err="failed to get container status \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": rpc error: code = NotFound desc = could not find container \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": 
container with ID starting with 656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.646147 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.646342 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} err="failed to get container status \"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\": rpc error: code = NotFound desc = could not find container \"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\": container with ID starting with 9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.646360 4925 scope.go:117] "RemoveContainer" containerID="4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.646560 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} err="failed to get container status \"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\": rpc error: code = NotFound desc = could not find container \"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\": container with ID starting with 4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.646577 4925 scope.go:117] "RemoveContainer" containerID="40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.646799 4925 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} err="failed to get container status \"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\": rpc error: code = NotFound desc = could not find container \"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\": container with ID starting with 40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.646859 4925 scope.go:117] "RemoveContainer" containerID="f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.647211 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} err="failed to get container status \"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\": rpc error: code = NotFound desc = could not find container \"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\": container with ID starting with f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.647229 4925 scope.go:117] "RemoveContainer" containerID="502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.647538 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} err="failed to get container status \"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\": rpc error: code = NotFound desc = could not find container \"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\": container with ID starting with 502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e not found: ID does not 
exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.647577 4925 scope.go:117] "RemoveContainer" containerID="a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.647857 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} err="failed to get container status \"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\": rpc error: code = NotFound desc = could not find container \"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\": container with ID starting with a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.647875 4925 scope.go:117] "RemoveContainer" containerID="33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.648059 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} err="failed to get container status \"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\": rpc error: code = NotFound desc = could not find container \"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\": container with ID starting with 33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.648124 4925 scope.go:117] "RemoveContainer" containerID="e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.648348 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} err="failed to get container status 
\"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\": rpc error: code = NotFound desc = could not find container \"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\": container with ID starting with e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.648388 4925 scope.go:117] "RemoveContainer" containerID="1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.648709 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466"} err="failed to get container status \"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\": rpc error: code = NotFound desc = could not find container \"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\": container with ID starting with 1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.648731 4925 scope.go:117] "RemoveContainer" containerID="656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.649140 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} err="failed to get container status \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": rpc error: code = NotFound desc = could not find container \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": container with ID starting with 656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.649162 4925 scope.go:117] "RemoveContainer" 
containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.649404 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} err="failed to get container status \"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\": rpc error: code = NotFound desc = could not find container \"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\": container with ID starting with 9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.649422 4925 scope.go:117] "RemoveContainer" containerID="4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.649618 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} err="failed to get container status \"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\": rpc error: code = NotFound desc = could not find container \"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\": container with ID starting with 4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.649634 4925 scope.go:117] "RemoveContainer" containerID="40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.649796 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} err="failed to get container status \"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\": rpc error: code = NotFound desc = could 
not find container \"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\": container with ID starting with 40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.649810 4925 scope.go:117] "RemoveContainer" containerID="f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.649987 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} err="failed to get container status \"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\": rpc error: code = NotFound desc = could not find container \"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\": container with ID starting with f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.650006 4925 scope.go:117] "RemoveContainer" containerID="502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.650222 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} err="failed to get container status \"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\": rpc error: code = NotFound desc = could not find container \"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\": container with ID starting with 502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.650238 4925 scope.go:117] "RemoveContainer" containerID="a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 
11:08:58.650498 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} err="failed to get container status \"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\": rpc error: code = NotFound desc = could not find container \"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\": container with ID starting with a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.650517 4925 scope.go:117] "RemoveContainer" containerID="33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.650734 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} err="failed to get container status \"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\": rpc error: code = NotFound desc = could not find container \"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\": container with ID starting with 33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.650752 4925 scope.go:117] "RemoveContainer" containerID="e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.650935 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} err="failed to get container status \"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\": rpc error: code = NotFound desc = could not find container \"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\": container with ID starting with 
e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.650951 4925 scope.go:117] "RemoveContainer" containerID="1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.651215 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466"} err="failed to get container status \"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\": rpc error: code = NotFound desc = could not find container \"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\": container with ID starting with 1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.651271 4925 scope.go:117] "RemoveContainer" containerID="656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.651537 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} err="failed to get container status \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": rpc error: code = NotFound desc = could not find container \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": container with ID starting with 656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.651557 4925 scope.go:117] "RemoveContainer" containerID="9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.651846 4925 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d"} err="failed to get container status \"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\": rpc error: code = NotFound desc = could not find container \"9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d\": container with ID starting with 9280b011bd96cd3017464d6a95585c70344870454ac708c07220262186a9109d not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.651885 4925 scope.go:117] "RemoveContainer" containerID="4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.652163 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810"} err="failed to get container status \"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\": rpc error: code = NotFound desc = could not find container \"4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810\": container with ID starting with 4ab7ff8f666a49622eb0fdcd2a6132688ad5b06e4030b22a7af4da5cbece7810 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.652199 4925 scope.go:117] "RemoveContainer" containerID="40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.652478 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9"} err="failed to get container status \"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\": rpc error: code = NotFound desc = could not find container \"40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9\": container with ID starting with 40286873e4adb01fe9cc1933664993d9ecae8ff944f447f0b72ff53bf73f79f9 not found: ID does not 
exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.652498 4925 scope.go:117] "RemoveContainer" containerID="f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.652788 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b"} err="failed to get container status \"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\": rpc error: code = NotFound desc = could not find container \"f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b\": container with ID starting with f02b9594ee3c79a0bd6cfeab8800b29aede0b183a613bf8ef3ca72c4efdea61b not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.652833 4925 scope.go:117] "RemoveContainer" containerID="502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.653317 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e"} err="failed to get container status \"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\": rpc error: code = NotFound desc = could not find container \"502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e\": container with ID starting with 502dff43817376cc02dd6ff7fcca97aad87e2decb7651d06e8680b6144c9e42e not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.653335 4925 scope.go:117] "RemoveContainer" containerID="a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.653538 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5"} err="failed to get container status 
\"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\": rpc error: code = NotFound desc = could not find container \"a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5\": container with ID starting with a28eed6b3319bc6c36f6fa1ec521c0ed70e97111dfa369ab057e28688b2b88c5 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.653557 4925 scope.go:117] "RemoveContainer" containerID="33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.653777 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161"} err="failed to get container status \"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\": rpc error: code = NotFound desc = could not find container \"33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161\": container with ID starting with 33f1c9ca8b902e8b2423f83e8d8676969b32fcf6cb97b7f5d792d921120d5161 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.653794 4925 scope.go:117] "RemoveContainer" containerID="e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.654088 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8"} err="failed to get container status \"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\": rpc error: code = NotFound desc = could not find container \"e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8\": container with ID starting with e78fc61d950f042505c7389179332980b43841c21d8151ca985f01d7b7e114c8 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.654106 4925 scope.go:117] "RemoveContainer" 
containerID="1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.654356 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466"} err="failed to get container status \"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\": rpc error: code = NotFound desc = could not find container \"1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466\": container with ID starting with 1d989851af1d17d193c34ebcb0fb03e14baadd29d734a17581887136ca438466 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.654388 4925 scope.go:117] "RemoveContainer" containerID="656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.654638 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819"} err="failed to get container status \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": rpc error: code = NotFound desc = could not find container \"656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819\": container with ID starting with 656fdc113ef0ecf6ee4d827485a43ed5dea286a151683bb6aeb8f93e9dcc5819 not found: ID does not exist" Feb 02 11:08:58 crc kubenswrapper[4925]: I0202 11:08:58.671051 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a57c5d12-a4de-413c-a581-4b693550e8c3" path="/var/lib/kubelet/pods/a57c5d12-a4de-413c-a581-4b693550e8c3/volumes" Feb 02 11:08:59 crc kubenswrapper[4925]: I0202 11:08:59.446534 4925 generic.go:334] "Generic (PLEG): container finished" podID="6714b54f-a429-45cf-9521-501e3476a431" containerID="598d878a0658224ca143e758467b16c248bf09dac3453f6d3ed8d2bb4ef1042d" exitCode=0 Feb 02 11:08:59 crc kubenswrapper[4925]: I0202 
11:08:59.446602 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerDied","Data":"598d878a0658224ca143e758467b16c248bf09dac3453f6d3ed8d2bb4ef1042d"} Feb 02 11:08:59 crc kubenswrapper[4925]: I0202 11:08:59.446646 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerStarted","Data":"fa51e14bea343e364c293c2c0ae9823e45d42a5adb2b65591aca1a7a223fabb1"} Feb 02 11:09:00 crc kubenswrapper[4925]: I0202 11:09:00.455371 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerStarted","Data":"7c9970017f02d901bfb04cc59d2741f591e35b77abb8bfdafa37fcf3436101bd"} Feb 02 11:09:00 crc kubenswrapper[4925]: I0202 11:09:00.455921 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerStarted","Data":"0fae1153a056d46e871d5a0703e5b71a023d0f464298dcd2e035194a7a5510df"} Feb 02 11:09:00 crc kubenswrapper[4925]: I0202 11:09:00.455930 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerStarted","Data":"c84ff5f59443675e4c4ea25cfa9009a33364bcef9bd0f1d5ab0be6da85488d5e"} Feb 02 11:09:00 crc kubenswrapper[4925]: I0202 11:09:00.455939 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerStarted","Data":"3e660910de59602c5fc84f5d6d0c5d9ec8ae299498d6074c6b126bedf4f3a946"} Feb 02 11:09:00 crc kubenswrapper[4925]: I0202 11:09:00.455946 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerStarted","Data":"fbe60e46dc1e947ced23652963cdbbd4ecc244353a7701836052e44d1964a71d"} Feb 02 11:09:01 crc kubenswrapper[4925]: I0202 11:09:01.465008 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerStarted","Data":"32804c2c78f5b931db23831e1f86e93fd98f55ebebb26725d621cd28b4f713ac"} Feb 02 11:09:03 crc kubenswrapper[4925]: I0202 11:09:03.480216 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerStarted","Data":"08777c720c59c34901bc1b980dba280dc780cae8e64e53b4aa4dd170434fc488"} Feb 02 11:09:04 crc kubenswrapper[4925]: I0202 11:09:04.971736 4925 scope.go:117] "RemoveContainer" containerID="fdc3f2b2681e98e48a3d3a5a2c79702766436ccef4ef7cd49600a53b58ca6032" Feb 02 11:09:05 crc kubenswrapper[4925]: I0202 11:09:05.494604 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/2.log" Feb 02 11:09:06 crc kubenswrapper[4925]: I0202 11:09:06.506931 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" event={"ID":"6714b54f-a429-45cf-9521-501e3476a431","Type":"ContainerStarted","Data":"cd523853c96f44b06b1d0c465b856398ba78c817d68d13ab36a4f26af58bc25e"} Feb 02 11:09:06 crc kubenswrapper[4925]: I0202 11:09:06.507343 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:09:06 crc kubenswrapper[4925]: I0202 11:09:06.507390 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:09:06 crc kubenswrapper[4925]: I0202 11:09:06.507402 4925 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:09:06 crc kubenswrapper[4925]: I0202 11:09:06.539629 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:09:06 crc kubenswrapper[4925]: I0202 11:09:06.539762 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:09:06 crc kubenswrapper[4925]: I0202 11:09:06.546467 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" podStartSLOduration=8.546434899 podStartE2EDuration="8.546434899s" podCreationTimestamp="2026-02-02 11:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:09:06.540672414 +0000 UTC m=+723.544921386" watchObservedRunningTime="2026-02-02 11:09:06.546434899 +0000 UTC m=+723.550683861" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.637066 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v"] Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.639106 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.640730 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.654661 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v"] Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.722001 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.723034 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.723134 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cjtg\" (UniqueName: \"kubernetes.io/projected/6643842e-f888-4afc-ac1c-c2e7ef17360d-kube-api-access-8cjtg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: 
I0202 11:09:11.825118 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.825224 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.825255 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cjtg\" (UniqueName: \"kubernetes.io/projected/6643842e-f888-4afc-ac1c-c2e7ef17360d-kube-api-access-8cjtg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.825710 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.825943 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.844364 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cjtg\" (UniqueName: \"kubernetes.io/projected/6643842e-f888-4afc-ac1c-c2e7ef17360d-kube-api-access-8cjtg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:11 crc kubenswrapper[4925]: I0202 11:09:11.955332 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:12 crc kubenswrapper[4925]: E0202 11:09:12.104162 4925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace_6643842e-f888-4afc-ac1c-c2e7ef17360d_0(b19ae5b2f04ab915a47e786d0fe3c544d7659189012a7b50eaa0d063bfc7a415): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 11:09:12 crc kubenswrapper[4925]: E0202 11:09:12.104395 4925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace_6643842e-f888-4afc-ac1c-c2e7ef17360d_0(b19ae5b2f04ab915a47e786d0fe3c544d7659189012a7b50eaa0d063bfc7a415): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:12 crc kubenswrapper[4925]: E0202 11:09:12.104539 4925 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace_6643842e-f888-4afc-ac1c-c2e7ef17360d_0(b19ae5b2f04ab915a47e786d0fe3c544d7659189012a7b50eaa0d063bfc7a415): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:12 crc kubenswrapper[4925]: E0202 11:09:12.104718 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace(6643842e-f888-4afc-ac1c-c2e7ef17360d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace(6643842e-f888-4afc-ac1c-c2e7ef17360d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace_6643842e-f888-4afc-ac1c-c2e7ef17360d_0(b19ae5b2f04ab915a47e786d0fe3c544d7659189012a7b50eaa0d063bfc7a415): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" podUID="6643842e-f888-4afc-ac1c-c2e7ef17360d" Feb 02 11:09:12 crc kubenswrapper[4925]: I0202 11:09:12.540974 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:12 crc kubenswrapper[4925]: I0202 11:09:12.541767 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:12 crc kubenswrapper[4925]: E0202 11:09:12.584994 4925 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace_6643842e-f888-4afc-ac1c-c2e7ef17360d_0(f0fe9541a659390d417bf8cc67f60ebce06a1ffa0e51c5bbc2c1ce393c62ce1f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 11:09:12 crc kubenswrapper[4925]: E0202 11:09:12.585117 4925 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace_6643842e-f888-4afc-ac1c-c2e7ef17360d_0(f0fe9541a659390d417bf8cc67f60ebce06a1ffa0e51c5bbc2c1ce393c62ce1f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:12 crc kubenswrapper[4925]: E0202 11:09:12.585160 4925 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace_6643842e-f888-4afc-ac1c-c2e7ef17360d_0(f0fe9541a659390d417bf8cc67f60ebce06a1ffa0e51c5bbc2c1ce393c62ce1f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:12 crc kubenswrapper[4925]: E0202 11:09:12.585223 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace(6643842e-f888-4afc-ac1c-c2e7ef17360d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace(6643842e-f888-4afc-ac1c-c2e7ef17360d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_openshift-marketplace_6643842e-f888-4afc-ac1c-c2e7ef17360d_0(f0fe9541a659390d417bf8cc67f60ebce06a1ffa0e51c5bbc2c1ce393c62ce1f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" podUID="6643842e-f888-4afc-ac1c-c2e7ef17360d" Feb 02 11:09:13 crc kubenswrapper[4925]: I0202 11:09:13.398463 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:09:13 crc kubenswrapper[4925]: I0202 11:09:13.399740 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:09:13 crc kubenswrapper[4925]: I0202 11:09:13.664120 4925 scope.go:117] "RemoveContainer" containerID="b84be1334f2ff06bf521e5ecdedb24f9d1ffe0fd8cd6bd23e7e3ee59feabaae5" 
Feb 02 11:09:13 crc kubenswrapper[4925]: E0202 11:09:13.664297 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q4rr9_openshift-multus(b84c6881-f719-456f-9135-7dfb7688a48d)\"" pod="openshift-multus/multus-q4rr9" podUID="b84c6881-f719-456f-9135-7dfb7688a48d" Feb 02 11:09:24 crc kubenswrapper[4925]: I0202 11:09:24.668245 4925 scope.go:117] "RemoveContainer" containerID="b84be1334f2ff06bf521e5ecdedb24f9d1ffe0fd8cd6bd23e7e3ee59feabaae5" Feb 02 11:09:25 crc kubenswrapper[4925]: I0202 11:09:25.645701 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4rr9_b84c6881-f719-456f-9135-7dfb7688a48d/kube-multus/2.log" Feb 02 11:09:25 crc kubenswrapper[4925]: I0202 11:09:25.646013 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4rr9" event={"ID":"b84c6881-f719-456f-9135-7dfb7688a48d","Type":"ContainerStarted","Data":"7079dba6687dc1a898ea9accc3f5c15b3b22306faee76f716c15452e270a6075"} Feb 02 11:09:27 crc kubenswrapper[4925]: I0202 11:09:27.664168 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:27 crc kubenswrapper[4925]: I0202 11:09:27.664679 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:27 crc kubenswrapper[4925]: I0202 11:09:27.915377 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v"] Feb 02 11:09:27 crc kubenswrapper[4925]: W0202 11:09:27.920799 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6643842e_f888_4afc_ac1c_c2e7ef17360d.slice/crio-3922fc450742f847e385fdf7f2c47421036edfe96e4b6a267d8b8e1b83114891 WatchSource:0}: Error finding container 3922fc450742f847e385fdf7f2c47421036edfe96e4b6a267d8b8e1b83114891: Status 404 returned error can't find the container with id 3922fc450742f847e385fdf7f2c47421036edfe96e4b6a267d8b8e1b83114891 Feb 02 11:09:28 crc kubenswrapper[4925]: I0202 11:09:28.465266 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dzkbp" Feb 02 11:09:28 crc kubenswrapper[4925]: I0202 11:09:28.669603 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" event={"ID":"6643842e-f888-4afc-ac1c-c2e7ef17360d","Type":"ContainerStarted","Data":"f1b05ccf79ee8589d7407ca60ef936cde80adc2698c4dd0ecbd8f185559fb3c8"} Feb 02 11:09:28 crc kubenswrapper[4925]: I0202 11:09:28.669647 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" event={"ID":"6643842e-f888-4afc-ac1c-c2e7ef17360d","Type":"ContainerStarted","Data":"3922fc450742f847e385fdf7f2c47421036edfe96e4b6a267d8b8e1b83114891"} Feb 02 11:09:29 crc kubenswrapper[4925]: I0202 11:09:29.672527 4925 generic.go:334] "Generic (PLEG): container finished" podID="6643842e-f888-4afc-ac1c-c2e7ef17360d" 
containerID="f1b05ccf79ee8589d7407ca60ef936cde80adc2698c4dd0ecbd8f185559fb3c8" exitCode=0 Feb 02 11:09:29 crc kubenswrapper[4925]: I0202 11:09:29.672619 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" event={"ID":"6643842e-f888-4afc-ac1c-c2e7ef17360d","Type":"ContainerDied","Data":"f1b05ccf79ee8589d7407ca60ef936cde80adc2698c4dd0ecbd8f185559fb3c8"} Feb 02 11:09:32 crc kubenswrapper[4925]: I0202 11:09:32.698704 4925 generic.go:334] "Generic (PLEG): container finished" podID="6643842e-f888-4afc-ac1c-c2e7ef17360d" containerID="c6318a4aa270ce7d9d2cbf5c4b12b395777371713b373c59aa9f313114efbd43" exitCode=0 Feb 02 11:09:32 crc kubenswrapper[4925]: I0202 11:09:32.698761 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" event={"ID":"6643842e-f888-4afc-ac1c-c2e7ef17360d","Type":"ContainerDied","Data":"c6318a4aa270ce7d9d2cbf5c4b12b395777371713b373c59aa9f313114efbd43"} Feb 02 11:09:33 crc kubenswrapper[4925]: I0202 11:09:33.713866 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" event={"ID":"6643842e-f888-4afc-ac1c-c2e7ef17360d","Type":"ContainerStarted","Data":"48d56c6a7b283e6b516ec43b7b5eea38f3acba6a43f2323b65e1e9d387d1d24d"} Feb 02 11:09:33 crc kubenswrapper[4925]: I0202 11:09:33.737783 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" podStartSLOduration=20.009306562 podStartE2EDuration="22.737761344s" podCreationTimestamp="2026-02-02 11:09:11 +0000 UTC" firstStartedPulling="2026-02-02 11:09:29.67390948 +0000 UTC m=+746.678158442" lastFinishedPulling="2026-02-02 11:09:32.402364262 +0000 UTC m=+749.406613224" observedRunningTime="2026-02-02 11:09:33.737331753 +0000 UTC 
m=+750.741580715" watchObservedRunningTime="2026-02-02 11:09:33.737761344 +0000 UTC m=+750.742010306" Feb 02 11:09:43 crc kubenswrapper[4925]: I0202 11:09:43.398842 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:09:43 crc kubenswrapper[4925]: I0202 11:09:43.399732 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:09:43 crc kubenswrapper[4925]: I0202 11:09:43.784815 4925 generic.go:334] "Generic (PLEG): container finished" podID="6643842e-f888-4afc-ac1c-c2e7ef17360d" containerID="48d56c6a7b283e6b516ec43b7b5eea38f3acba6a43f2323b65e1e9d387d1d24d" exitCode=0 Feb 02 11:09:43 crc kubenswrapper[4925]: I0202 11:09:43.784916 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" event={"ID":"6643842e-f888-4afc-ac1c-c2e7ef17360d","Type":"ContainerDied","Data":"48d56c6a7b283e6b516ec43b7b5eea38f3acba6a43f2323b65e1e9d387d1d24d"} Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.095238 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.217978 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cjtg\" (UniqueName: \"kubernetes.io/projected/6643842e-f888-4afc-ac1c-c2e7ef17360d-kube-api-access-8cjtg\") pod \"6643842e-f888-4afc-ac1c-c2e7ef17360d\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.218050 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-util\") pod \"6643842e-f888-4afc-ac1c-c2e7ef17360d\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.218239 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-bundle\") pod \"6643842e-f888-4afc-ac1c-c2e7ef17360d\" (UID: \"6643842e-f888-4afc-ac1c-c2e7ef17360d\") " Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.220471 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-bundle" (OuterVolumeSpecName: "bundle") pod "6643842e-f888-4afc-ac1c-c2e7ef17360d" (UID: "6643842e-f888-4afc-ac1c-c2e7ef17360d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.229381 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6643842e-f888-4afc-ac1c-c2e7ef17360d-kube-api-access-8cjtg" (OuterVolumeSpecName: "kube-api-access-8cjtg") pod "6643842e-f888-4afc-ac1c-c2e7ef17360d" (UID: "6643842e-f888-4afc-ac1c-c2e7ef17360d"). InnerVolumeSpecName "kube-api-access-8cjtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.237494 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-util" (OuterVolumeSpecName: "util") pod "6643842e-f888-4afc-ac1c-c2e7ef17360d" (UID: "6643842e-f888-4afc-ac1c-c2e7ef17360d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.319608 4925 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.319664 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cjtg\" (UniqueName: \"kubernetes.io/projected/6643842e-f888-4afc-ac1c-c2e7ef17360d-kube-api-access-8cjtg\") on node \"crc\" DevicePath \"\"" Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.319689 4925 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6643842e-f888-4afc-ac1c-c2e7ef17360d-util\") on node \"crc\" DevicePath \"\"" Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.801428 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" event={"ID":"6643842e-f888-4afc-ac1c-c2e7ef17360d","Type":"ContainerDied","Data":"3922fc450742f847e385fdf7f2c47421036edfe96e4b6a267d8b8e1b83114891"} Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.801485 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3922fc450742f847e385fdf7f2c47421036edfe96e4b6a267d8b8e1b83114891" Feb 02 11:09:45 crc kubenswrapper[4925]: I0202 11:09:45.801491 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v" Feb 02 11:09:46 crc kubenswrapper[4925]: I0202 11:09:46.703921 4925 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.356608 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tm82s"] Feb 02 11:09:48 crc kubenswrapper[4925]: E0202 11:09:48.357216 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6643842e-f888-4afc-ac1c-c2e7ef17360d" containerName="util" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.357231 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6643842e-f888-4afc-ac1c-c2e7ef17360d" containerName="util" Feb 02 11:09:48 crc kubenswrapper[4925]: E0202 11:09:48.357241 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6643842e-f888-4afc-ac1c-c2e7ef17360d" containerName="pull" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.357247 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6643842e-f888-4afc-ac1c-c2e7ef17360d" containerName="pull" Feb 02 11:09:48 crc kubenswrapper[4925]: E0202 11:09:48.357260 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6643842e-f888-4afc-ac1c-c2e7ef17360d" containerName="extract" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.357267 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6643842e-f888-4afc-ac1c-c2e7ef17360d" containerName="extract" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.357378 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6643842e-f888-4afc-ac1c-c2e7ef17360d" containerName="extract" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.357714 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tm82s" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.359455 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-26gjd" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.359810 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.360143 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.372493 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tm82s"] Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.457273 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl55z\" (UniqueName: \"kubernetes.io/projected/fd2e1ecb-2c35-4496-8679-da6345ee07a2-kube-api-access-rl55z\") pod \"nmstate-operator-646758c888-tm82s\" (UID: \"fd2e1ecb-2c35-4496-8679-da6345ee07a2\") " pod="openshift-nmstate/nmstate-operator-646758c888-tm82s" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.563223 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl55z\" (UniqueName: \"kubernetes.io/projected/fd2e1ecb-2c35-4496-8679-da6345ee07a2-kube-api-access-rl55z\") pod \"nmstate-operator-646758c888-tm82s\" (UID: \"fd2e1ecb-2c35-4496-8679-da6345ee07a2\") " pod="openshift-nmstate/nmstate-operator-646758c888-tm82s" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.583594 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl55z\" (UniqueName: \"kubernetes.io/projected/fd2e1ecb-2c35-4496-8679-da6345ee07a2-kube-api-access-rl55z\") pod \"nmstate-operator-646758c888-tm82s\" (UID: 
\"fd2e1ecb-2c35-4496-8679-da6345ee07a2\") " pod="openshift-nmstate/nmstate-operator-646758c888-tm82s" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.674372 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tm82s" Feb 02 11:09:48 crc kubenswrapper[4925]: I0202 11:09:48.913069 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tm82s"] Feb 02 11:09:49 crc kubenswrapper[4925]: I0202 11:09:49.825713 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tm82s" event={"ID":"fd2e1ecb-2c35-4496-8679-da6345ee07a2","Type":"ContainerStarted","Data":"6a03da5d9796fb16ac5a08611122f50864e7a7361c88aa30a1b660707565aa60"} Feb 02 11:09:52 crc kubenswrapper[4925]: I0202 11:09:52.843533 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tm82s" event={"ID":"fd2e1ecb-2c35-4496-8679-da6345ee07a2","Type":"ContainerStarted","Data":"d39fa249c0070511caab62c3932c1facc92be9db1386a6605da6243ca6dc7293"} Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.731948 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-tm82s" podStartSLOduration=2.675107818 podStartE2EDuration="5.731933218s" podCreationTimestamp="2026-02-02 11:09:48 +0000 UTC" firstStartedPulling="2026-02-02 11:09:48.928118208 +0000 UTC m=+765.932367170" lastFinishedPulling="2026-02-02 11:09:51.984943608 +0000 UTC m=+768.989192570" observedRunningTime="2026-02-02 11:09:52.863971436 +0000 UTC m=+769.868220418" watchObservedRunningTime="2026-02-02 11:09:53.731933218 +0000 UTC m=+770.736182180" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.734398 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hzmqd"] Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 
11:09:53.735219 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hzmqd" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.739744 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7lvnn" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.748894 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr"] Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.749809 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.751584 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.757187 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-26h6v"] Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.758232 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.762459 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hzmqd"] Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.765490 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr"] Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.858727 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkwr2\" (UniqueName: \"kubernetes.io/projected/658d0400-3726-4797-a477-8d95c17ccd3a-kube-api-access-bkwr2\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.859692 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rktgb\" (UniqueName: \"kubernetes.io/projected/3d287bf3-d7ef-4ccf-ad54-c56563a8092c-kube-api-access-rktgb\") pod \"nmstate-metrics-54757c584b-hzmqd\" (UID: \"3d287bf3-d7ef-4ccf-ad54-c56563a8092c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hzmqd" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.859795 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/658d0400-3726-4797-a477-8d95c17ccd3a-dbus-socket\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.859912 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6w5t\" (UniqueName: \"kubernetes.io/projected/21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd-kube-api-access-h6w5t\") pod 
\"nmstate-webhook-8474b5b9d8-j84xr\" (UID: \"21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.859996 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j84xr\" (UID: \"21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.860096 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/658d0400-3726-4797-a477-8d95c17ccd3a-ovs-socket\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.860203 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/658d0400-3726-4797-a477-8d95c17ccd3a-nmstate-lock\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.879851 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr"] Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.880476 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.882281 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.882864 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jjh2h" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.883035 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.901227 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr"] Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.961558 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkwr2\" (UniqueName: \"kubernetes.io/projected/658d0400-3726-4797-a477-8d95c17ccd3a-kube-api-access-bkwr2\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.961653 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rktgb\" (UniqueName: \"kubernetes.io/projected/3d287bf3-d7ef-4ccf-ad54-c56563a8092c-kube-api-access-rktgb\") pod \"nmstate-metrics-54757c584b-hzmqd\" (UID: \"3d287bf3-d7ef-4ccf-ad54-c56563a8092c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hzmqd" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.961693 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/658d0400-3726-4797-a477-8d95c17ccd3a-dbus-socket\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc 
kubenswrapper[4925]: I0202 11:09:53.961740 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6w5t\" (UniqueName: \"kubernetes.io/projected/21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd-kube-api-access-h6w5t\") pod \"nmstate-webhook-8474b5b9d8-j84xr\" (UID: \"21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.961770 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j84xr\" (UID: \"21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.961793 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/658d0400-3726-4797-a477-8d95c17ccd3a-ovs-socket\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.961831 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/658d0400-3726-4797-a477-8d95c17ccd3a-nmstate-lock\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.961908 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/658d0400-3726-4797-a477-8d95c17ccd3a-nmstate-lock\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: E0202 11:09:53.961986 4925 
secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.962016 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/658d0400-3726-4797-a477-8d95c17ccd3a-ovs-socket\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: E0202 11:09:53.962099 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd-tls-key-pair podName:21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd nodeName:}" failed. No retries permitted until 2026-02-02 11:09:54.462055368 +0000 UTC m=+771.466304420 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-j84xr" (UID: "21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd") : secret "openshift-nmstate-webhook" not found Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.963287 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/658d0400-3726-4797-a477-8d95c17ccd3a-dbus-socket\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.980031 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6w5t\" (UniqueName: \"kubernetes.io/projected/21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd-kube-api-access-h6w5t\") pod \"nmstate-webhook-8474b5b9d8-j84xr\" (UID: \"21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.981129 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rktgb\" (UniqueName: \"kubernetes.io/projected/3d287bf3-d7ef-4ccf-ad54-c56563a8092c-kube-api-access-rktgb\") pod \"nmstate-metrics-54757c584b-hzmqd\" (UID: \"3d287bf3-d7ef-4ccf-ad54-c56563a8092c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hzmqd" Feb 02 11:09:53 crc kubenswrapper[4925]: I0202 11:09:53.982967 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkwr2\" (UniqueName: \"kubernetes.io/projected/658d0400-3726-4797-a477-8d95c17ccd3a-kube-api-access-bkwr2\") pod \"nmstate-handler-26h6v\" (UID: \"658d0400-3726-4797-a477-8d95c17ccd3a\") " pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.062597 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wvn\" (UniqueName: \"kubernetes.io/projected/fdf9fdc0-d0bc-48eb-881f-9f053560d16d-kube-api-access-d9wvn\") pod \"nmstate-console-plugin-7754f76f8b-wvpzr\" (UID: \"fdf9fdc0-d0bc-48eb-881f-9f053560d16d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.063211 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fdf9fdc0-d0bc-48eb-881f-9f053560d16d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wvpzr\" (UID: \"fdf9fdc0-d0bc-48eb-881f-9f053560d16d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.063316 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf9fdc0-d0bc-48eb-881f-9f053560d16d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wvpzr\" (UID: \"fdf9fdc0-d0bc-48eb-881f-9f053560d16d\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.091313 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76968fc589-cpk94"] Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.091917 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.107504 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hzmqd" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.114274 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76968fc589-cpk94"] Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.130664 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.164164 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf9fdc0-d0bc-48eb-881f-9f053560d16d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wvpzr\" (UID: \"fdf9fdc0-d0bc-48eb-881f-9f053560d16d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.164242 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9wvn\" (UniqueName: \"kubernetes.io/projected/fdf9fdc0-d0bc-48eb-881f-9f053560d16d-kube-api-access-d9wvn\") pod \"nmstate-console-plugin-7754f76f8b-wvpzr\" (UID: \"fdf9fdc0-d0bc-48eb-881f-9f053560d16d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.164287 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/fdf9fdc0-d0bc-48eb-881f-9f053560d16d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wvpzr\" (UID: \"fdf9fdc0-d0bc-48eb-881f-9f053560d16d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.166176 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fdf9fdc0-d0bc-48eb-881f-9f053560d16d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wvpzr\" (UID: \"fdf9fdc0-d0bc-48eb-881f-9f053560d16d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.169169 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf9fdc0-d0bc-48eb-881f-9f053560d16d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wvpzr\" (UID: \"fdf9fdc0-d0bc-48eb-881f-9f053560d16d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.180634 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9wvn\" (UniqueName: \"kubernetes.io/projected/fdf9fdc0-d0bc-48eb-881f-9f053560d16d-kube-api-access-d9wvn\") pod \"nmstate-console-plugin-7754f76f8b-wvpzr\" (UID: \"fdf9fdc0-d0bc-48eb-881f-9f053560d16d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.193482 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.265767 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-oauth-serving-cert\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.266183 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-trusted-ca-bundle\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.266248 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-console-serving-cert\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.266272 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-console-config\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.266317 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-service-ca\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.266339 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-console-oauth-config\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.266373 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4prdk\" (UniqueName: \"kubernetes.io/projected/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-kube-api-access-4prdk\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.338150 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hzmqd"] Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.368279 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-trusted-ca-bundle\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.368343 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-console-serving-cert\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " 
pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.368372 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-console-config\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.368411 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-service-ca\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.368445 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-console-oauth-config\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.368619 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4prdk\" (UniqueName: \"kubernetes.io/projected/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-kube-api-access-4prdk\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.368959 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-oauth-serving-cert\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " 
pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.369507 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-console-config\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.369583 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-service-ca\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.369681 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-oauth-serving-cert\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.370604 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-trusted-ca-bundle\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.373198 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-console-oauth-config\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc 
kubenswrapper[4925]: I0202 11:09:54.373205 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-console-serving-cert\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.390714 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4prdk\" (UniqueName: \"kubernetes.io/projected/3c8c74cc-4425-4704-b4be-d7b8d0cc00eb-kube-api-access-4prdk\") pod \"console-76968fc589-cpk94\" (UID: \"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb\") " pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.404961 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr"] Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.409108 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:09:54 crc kubenswrapper[4925]: W0202 11:09:54.410115 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdf9fdc0_d0bc_48eb_881f_9f053560d16d.slice/crio-31ab6c669ad17d9cdcf9bf8d7d6388d960f321461a718aee3c9cf96e06343750 WatchSource:0}: Error finding container 31ab6c669ad17d9cdcf9bf8d7d6388d960f321461a718aee3c9cf96e06343750: Status 404 returned error can't find the container with id 31ab6c669ad17d9cdcf9bf8d7d6388d960f321461a718aee3c9cf96e06343750 Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.470725 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j84xr\" (UID: \"21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.478302 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j84xr\" (UID: \"21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.616280 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76968fc589-cpk94"] Feb 02 11:09:54 crc kubenswrapper[4925]: W0202 11:09:54.622347 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c8c74cc_4425_4704_b4be_d7b8d0cc00eb.slice/crio-37e60c6c85aa2da45561e9d0676bfed1054e8b0f1ac3fe39ac630f82b087f4e2 WatchSource:0}: Error finding container 37e60c6c85aa2da45561e9d0676bfed1054e8b0f1ac3fe39ac630f82b087f4e2: Status 404 
returned error can't find the container with id 37e60c6c85aa2da45561e9d0676bfed1054e8b0f1ac3fe39ac630f82b087f4e2 Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.727882 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.866106 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hzmqd" event={"ID":"3d287bf3-d7ef-4ccf-ad54-c56563a8092c","Type":"ContainerStarted","Data":"34ca10fefdd763459b9bb1d1f393ea7590e1229c256c3bc78305a1dee57d7c51"} Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.875334 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" event={"ID":"fdf9fdc0-d0bc-48eb-881f-9f053560d16d","Type":"ContainerStarted","Data":"31ab6c669ad17d9cdcf9bf8d7d6388d960f321461a718aee3c9cf96e06343750"} Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.878662 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76968fc589-cpk94" event={"ID":"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb","Type":"ContainerStarted","Data":"13ddfdd4f1e0ee55ba70111a64073a60328c948d62c7872956d4b00f581c77ac"} Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.878713 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76968fc589-cpk94" event={"ID":"3c8c74cc-4425-4704-b4be-d7b8d0cc00eb","Type":"ContainerStarted","Data":"37e60c6c85aa2da45561e9d0676bfed1054e8b0f1ac3fe39ac630f82b087f4e2"} Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.882036 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-26h6v" event={"ID":"658d0400-3726-4797-a477-8d95c17ccd3a","Type":"ContainerStarted","Data":"2e5b58329b048fa33f21e827193f188a75c0c719255cb23e0f799c8c9a33d2e4"} Feb 02 11:09:54 crc kubenswrapper[4925]: I0202 11:09:54.895998 4925 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76968fc589-cpk94" podStartSLOduration=0.895951234 podStartE2EDuration="895.951234ms" podCreationTimestamp="2026-02-02 11:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:09:54.892616064 +0000 UTC m=+771.896865036" watchObservedRunningTime="2026-02-02 11:09:54.895951234 +0000 UTC m=+771.900200206" Feb 02 11:09:55 crc kubenswrapper[4925]: I0202 11:09:55.132456 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr"] Feb 02 11:09:55 crc kubenswrapper[4925]: I0202 11:09:55.887609 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" event={"ID":"21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd","Type":"ContainerStarted","Data":"8b5910c145a3cba2f5a351a909ea21d607d8c1c04ca81bcf23855c0eba821956"} Feb 02 11:09:57 crc kubenswrapper[4925]: I0202 11:09:57.904587 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" event={"ID":"fdf9fdc0-d0bc-48eb-881f-9f053560d16d","Type":"ContainerStarted","Data":"1f6980e3e434effdd912dff691c2c106fc7adc1f1d72ab4eb019d9921bad8367"} Feb 02 11:09:57 crc kubenswrapper[4925]: I0202 11:09:57.906369 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hzmqd" event={"ID":"3d287bf3-d7ef-4ccf-ad54-c56563a8092c","Type":"ContainerStarted","Data":"c552fd41ed88492c282b834423854de29d5a570128603d27633f5c049dc453b7"} Feb 02 11:09:57 crc kubenswrapper[4925]: I0202 11:09:57.908031 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-26h6v" event={"ID":"658d0400-3726-4797-a477-8d95c17ccd3a","Type":"ContainerStarted","Data":"1ad440c18aed2f977848c683f60ab3bb14ae26f3344464083f0bc8ce17df879f"} Feb 02 11:09:57 
crc kubenswrapper[4925]: I0202 11:09:57.908161 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:09:57 crc kubenswrapper[4925]: I0202 11:09:57.909964 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" event={"ID":"21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd","Type":"ContainerStarted","Data":"a86c6fc219c9e184af7b48d57b06bb8e88eb304d7730b95c0710efc6ce9aaebc"} Feb 02 11:09:57 crc kubenswrapper[4925]: I0202 11:09:57.910138 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:09:57 crc kubenswrapper[4925]: I0202 11:09:57.918928 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wvpzr" podStartSLOduration=1.740247295 podStartE2EDuration="4.918909656s" podCreationTimestamp="2026-02-02 11:09:53 +0000 UTC" firstStartedPulling="2026-02-02 11:09:54.417714743 +0000 UTC m=+771.421963705" lastFinishedPulling="2026-02-02 11:09:57.596377104 +0000 UTC m=+774.600626066" observedRunningTime="2026-02-02 11:09:57.917666763 +0000 UTC m=+774.921915735" watchObservedRunningTime="2026-02-02 11:09:57.918909656 +0000 UTC m=+774.923158608" Feb 02 11:09:57 crc kubenswrapper[4925]: I0202 11:09:57.948581 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-26h6v" podStartSLOduration=1.515701096 podStartE2EDuration="4.94856418s" podCreationTimestamp="2026-02-02 11:09:53 +0000 UTC" firstStartedPulling="2026-02-02 11:09:54.162688088 +0000 UTC m=+771.166937040" lastFinishedPulling="2026-02-02 11:09:57.595551162 +0000 UTC m=+774.599800124" observedRunningTime="2026-02-02 11:09:57.948034276 +0000 UTC m=+774.952283238" watchObservedRunningTime="2026-02-02 11:09:57.94856418 +0000 UTC m=+774.952813142" Feb 02 11:09:57 crc kubenswrapper[4925]: I0202 
11:09:57.972319 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" podStartSLOduration=2.424848089 podStartE2EDuration="4.972296755s" podCreationTimestamp="2026-02-02 11:09:53 +0000 UTC" firstStartedPulling="2026-02-02 11:09:55.143561151 +0000 UTC m=+772.147810113" lastFinishedPulling="2026-02-02 11:09:57.691009817 +0000 UTC m=+774.695258779" observedRunningTime="2026-02-02 11:09:57.962944685 +0000 UTC m=+774.967193667" watchObservedRunningTime="2026-02-02 11:09:57.972296755 +0000 UTC m=+774.976545717" Feb 02 11:09:59 crc kubenswrapper[4925]: I0202 11:09:59.922908 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hzmqd" event={"ID":"3d287bf3-d7ef-4ccf-ad54-c56563a8092c","Type":"ContainerStarted","Data":"7a18d6b117b2853652394b2e9aa2ab47fe5bf838a0dc73ff42200215a3ebc219"} Feb 02 11:10:04 crc kubenswrapper[4925]: I0202 11:10:04.151795 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-26h6v" Feb 02 11:10:04 crc kubenswrapper[4925]: I0202 11:10:04.165180 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-hzmqd" podStartSLOduration=6.016167556 podStartE2EDuration="11.165158445s" podCreationTimestamp="2026-02-02 11:09:53 +0000 UTC" firstStartedPulling="2026-02-02 11:09:54.34587929 +0000 UTC m=+771.350128252" lastFinishedPulling="2026-02-02 11:09:59.494870179 +0000 UTC m=+776.499119141" observedRunningTime="2026-02-02 11:09:59.974340463 +0000 UTC m=+776.978589425" watchObservedRunningTime="2026-02-02 11:10:04.165158445 +0000 UTC m=+781.169407407" Feb 02 11:10:04 crc kubenswrapper[4925]: I0202 11:10:04.410200 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:10:04 crc kubenswrapper[4925]: I0202 11:10:04.410647 4925 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:10:04 crc kubenswrapper[4925]: I0202 11:10:04.416060 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:10:04 crc kubenswrapper[4925]: I0202 11:10:04.964844 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76968fc589-cpk94" Feb 02 11:10:05 crc kubenswrapper[4925]: I0202 11:10:05.034352 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nzwbr"] Feb 02 11:10:13 crc kubenswrapper[4925]: I0202 11:10:13.398864 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:10:13 crc kubenswrapper[4925]: I0202 11:10:13.399755 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:10:13 crc kubenswrapper[4925]: I0202 11:10:13.399860 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:10:13 crc kubenswrapper[4925]: I0202 11:10:13.400830 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffca907841f0a5bec449b7e08e60cef6f7cea31a8df22b28332865ae60f507bc"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Feb 02 11:10:13 crc kubenswrapper[4925]: I0202 11:10:13.400913 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://ffca907841f0a5bec449b7e08e60cef6f7cea31a8df22b28332865ae60f507bc" gracePeriod=600 Feb 02 11:10:14 crc kubenswrapper[4925]: I0202 11:10:14.024483 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="ffca907841f0a5bec449b7e08e60cef6f7cea31a8df22b28332865ae60f507bc" exitCode=0 Feb 02 11:10:14 crc kubenswrapper[4925]: I0202 11:10:14.024558 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"ffca907841f0a5bec449b7e08e60cef6f7cea31a8df22b28332865ae60f507bc"} Feb 02 11:10:14 crc kubenswrapper[4925]: I0202 11:10:14.025224 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"4b03a1975ff91abe6f92e545f0ab1b94a8a292e0264c3f7e53cacd130fa2f25b"} Feb 02 11:10:14 crc kubenswrapper[4925]: I0202 11:10:14.025251 4925 scope.go:117] "RemoveContainer" containerID="34ce5b38806e94c52ae2e1827e7acb76781694c5c09b9303334780fe7804194c" Feb 02 11:10:14 crc kubenswrapper[4925]: I0202 11:10:14.734198 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j84xr" Feb 02 11:10:26 crc kubenswrapper[4925]: I0202 11:10:26.992656 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns"] Feb 02 11:10:26 crc kubenswrapper[4925]: I0202 
11:10:26.994436 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:26 crc kubenswrapper[4925]: I0202 11:10:26.996143 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.004785 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns"] Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.056748 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.057340 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfm2\" (UniqueName: \"kubernetes.io/projected/4331a7b0-93b0-40b7-9b53-77b0664942b8-kube-api-access-5bfm2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.057376 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.158716 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.158771 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bfm2\" (UniqueName: \"kubernetes.io/projected/4331a7b0-93b0-40b7-9b53-77b0664942b8-kube-api-access-5bfm2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.158796 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.159504 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.159737 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.183676 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bfm2\" (UniqueName: \"kubernetes.io/projected/4331a7b0-93b0-40b7-9b53-77b0664942b8-kube-api-access-5bfm2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.317360 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:27 crc kubenswrapper[4925]: I0202 11:10:27.539808 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns"] Feb 02 11:10:28 crc kubenswrapper[4925]: I0202 11:10:28.123843 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" event={"ID":"4331a7b0-93b0-40b7-9b53-77b0664942b8","Type":"ContainerStarted","Data":"39dd2d86db2ddd3c561f3bcfb8e59d3a68b3a70f44142b1a380e20f109bd49fc"} Feb 02 11:10:29 crc kubenswrapper[4925]: I0202 11:10:29.131775 4925 generic.go:334] "Generic (PLEG): container finished" podID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerID="b37a9bec433d5c18299b64a2f0a079df553e5500c248c1660dd9ae8b784f693f" exitCode=0 Feb 02 11:10:29 crc kubenswrapper[4925]: I0202 11:10:29.131859 4925 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" event={"ID":"4331a7b0-93b0-40b7-9b53-77b0664942b8","Type":"ContainerDied","Data":"b37a9bec433d5c18299b64a2f0a079df553e5500c248c1660dd9ae8b784f693f"} Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.092968 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-nzwbr" podUID="81af45ef-2049-4155-9c0b-ae722e6b8c8a" containerName="console" containerID="cri-o://2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922" gracePeriod=15 Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.434064 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nzwbr_81af45ef-2049-4155-9c0b-ae722e6b8c8a/console/0.log" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.434406 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.570726 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n7tfm"] Feb 02 11:10:30 crc kubenswrapper[4925]: E0202 11:10:30.571148 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81af45ef-2049-4155-9c0b-ae722e6b8c8a" containerName="console" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.571171 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="81af45ef-2049-4155-9c0b-ae722e6b8c8a" containerName="console" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.571385 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="81af45ef-2049-4155-9c0b-ae722e6b8c8a" containerName="console" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.572771 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.595180 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7tfm"] Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605012 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-config\") pod \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605122 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-serving-cert\") pod \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605164 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-trusted-ca-bundle\") pod \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605191 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-oauth-serving-cert\") pod \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605243 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tdvx\" (UniqueName: \"kubernetes.io/projected/81af45ef-2049-4155-9c0b-ae722e6b8c8a-kube-api-access-9tdvx\") pod 
\"81af45ef-2049-4155-9c0b-ae722e6b8c8a\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605365 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-service-ca\") pod \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605423 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-oauth-config\") pod \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\" (UID: \"81af45ef-2049-4155-9c0b-ae722e6b8c8a\") " Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605588 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-config" (OuterVolumeSpecName: "console-config") pod "81af45ef-2049-4155-9c0b-ae722e6b8c8a" (UID: "81af45ef-2049-4155-9c0b-ae722e6b8c8a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605830 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "81af45ef-2049-4155-9c0b-ae722e6b8c8a" (UID: "81af45ef-2049-4155-9c0b-ae722e6b8c8a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605842 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-service-ca" (OuterVolumeSpecName: "service-ca") pod "81af45ef-2049-4155-9c0b-ae722e6b8c8a" (UID: "81af45ef-2049-4155-9c0b-ae722e6b8c8a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605875 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "81af45ef-2049-4155-9c0b-ae722e6b8c8a" (UID: "81af45ef-2049-4155-9c0b-ae722e6b8c8a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.605963 4925 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.619592 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81af45ef-2049-4155-9c0b-ae722e6b8c8a-kube-api-access-9tdvx" (OuterVolumeSpecName: "kube-api-access-9tdvx") pod "81af45ef-2049-4155-9c0b-ae722e6b8c8a" (UID: "81af45ef-2049-4155-9c0b-ae722e6b8c8a"). InnerVolumeSpecName "kube-api-access-9tdvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.624645 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "81af45ef-2049-4155-9c0b-ae722e6b8c8a" (UID: "81af45ef-2049-4155-9c0b-ae722e6b8c8a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.625171 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "81af45ef-2049-4155-9c0b-ae722e6b8c8a" (UID: "81af45ef-2049-4155-9c0b-ae722e6b8c8a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.706895 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkbkb\" (UniqueName: \"kubernetes.io/projected/2e817ea2-354d-42f1-bed6-64121e9a8778-kube-api-access-tkbkb\") pod \"redhat-operators-n7tfm\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") " pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.706989 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-catalog-content\") pod \"redhat-operators-n7tfm\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") " pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.707044 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-utilities\") pod \"redhat-operators-n7tfm\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") " pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.707152 4925 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.707177 4925 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.707224 4925 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81af45ef-2049-4155-9c0b-ae722e6b8c8a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.707389 4925 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.707403 4925 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81af45ef-2049-4155-9c0b-ae722e6b8c8a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.707414 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tdvx\" (UniqueName: \"kubernetes.io/projected/81af45ef-2049-4155-9c0b-ae722e6b8c8a-kube-api-access-9tdvx\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.808112 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-utilities\") pod \"redhat-operators-n7tfm\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") " pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.808186 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkbkb\" (UniqueName: \"kubernetes.io/projected/2e817ea2-354d-42f1-bed6-64121e9a8778-kube-api-access-tkbkb\") pod \"redhat-operators-n7tfm\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") " pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.808262 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-catalog-content\") pod \"redhat-operators-n7tfm\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") " pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.808502 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-utilities\") pod \"redhat-operators-n7tfm\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") " pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.808610 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-catalog-content\") pod \"redhat-operators-n7tfm\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") " pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.824905 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkbkb\" (UniqueName: 
\"kubernetes.io/projected/2e817ea2-354d-42f1-bed6-64121e9a8778-kube-api-access-tkbkb\") pod \"redhat-operators-n7tfm\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") " pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:30 crc kubenswrapper[4925]: I0202 11:10:30.889926 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.084951 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7tfm"] Feb 02 11:10:31 crc kubenswrapper[4925]: W0202 11:10:31.092793 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e817ea2_354d_42f1_bed6_64121e9a8778.slice/crio-9c66f976069f54a3459db8c5b2be7a9b1a5cdfafbd4d21a1d0cc850e124a1434 WatchSource:0}: Error finding container 9c66f976069f54a3459db8c5b2be7a9b1a5cdfafbd4d21a1d0cc850e124a1434: Status 404 returned error can't find the container with id 9c66f976069f54a3459db8c5b2be7a9b1a5cdfafbd4d21a1d0cc850e124a1434 Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.150049 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nzwbr_81af45ef-2049-4155-9c0b-ae722e6b8c8a/console/0.log" Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.150119 4925 generic.go:334] "Generic (PLEG): container finished" podID="81af45ef-2049-4155-9c0b-ae722e6b8c8a" containerID="2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922" exitCode=2 Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.150166 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nzwbr" event={"ID":"81af45ef-2049-4155-9c0b-ae722e6b8c8a","Type":"ContainerDied","Data":"2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922"} Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.150191 4925 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nzwbr" event={"ID":"81af45ef-2049-4155-9c0b-ae722e6b8c8a","Type":"ContainerDied","Data":"206c0cabd9a7fe7bb12dafe0a1faf7730ab3796a5ad870dd401211c46d371b02"} Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.150207 4925 scope.go:117] "RemoveContainer" containerID="2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922" Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.150297 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nzwbr" Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.152795 4925 generic.go:334] "Generic (PLEG): container finished" podID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerID="d743500689cf15408ba0e2baed3db2026e604a2d395d46a4d43f1903e026fdca" exitCode=0 Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.152855 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" event={"ID":"4331a7b0-93b0-40b7-9b53-77b0664942b8","Type":"ContainerDied","Data":"d743500689cf15408ba0e2baed3db2026e604a2d395d46a4d43f1903e026fdca"} Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.155978 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7tfm" event={"ID":"2e817ea2-354d-42f1-bed6-64121e9a8778","Type":"ContainerStarted","Data":"9c66f976069f54a3459db8c5b2be7a9b1a5cdfafbd4d21a1d0cc850e124a1434"} Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.183533 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nzwbr"] Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.188218 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-nzwbr"] Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.188429 4925 scope.go:117] "RemoveContainer" 
containerID="2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922" Feb 02 11:10:31 crc kubenswrapper[4925]: E0202 11:10:31.188895 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922\": container with ID starting with 2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922 not found: ID does not exist" containerID="2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922" Feb 02 11:10:31 crc kubenswrapper[4925]: I0202 11:10:31.188933 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922"} err="failed to get container status \"2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922\": rpc error: code = NotFound desc = could not find container \"2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922\": container with ID starting with 2609d605418d80b3337b32ceee622cfd9571e3ff6c7a487ad57a7a8ceb855922 not found: ID does not exist" Feb 02 11:10:32 crc kubenswrapper[4925]: I0202 11:10:32.171611 4925 generic.go:334] "Generic (PLEG): container finished" podID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerID="31a35f95e580d51c41fc717bf2dd9e73b704db2272af27dc1c66418d455c9654" exitCode=0 Feb 02 11:10:32 crc kubenswrapper[4925]: I0202 11:10:32.171732 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" event={"ID":"4331a7b0-93b0-40b7-9b53-77b0664942b8","Type":"ContainerDied","Data":"31a35f95e580d51c41fc717bf2dd9e73b704db2272af27dc1c66418d455c9654"} Feb 02 11:10:32 crc kubenswrapper[4925]: I0202 11:10:32.173985 4925 generic.go:334] "Generic (PLEG): container finished" podID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerID="a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934" 
exitCode=0 Feb 02 11:10:32 crc kubenswrapper[4925]: I0202 11:10:32.174032 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7tfm" event={"ID":"2e817ea2-354d-42f1-bed6-64121e9a8778","Type":"ContainerDied","Data":"a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934"} Feb 02 11:10:32 crc kubenswrapper[4925]: I0202 11:10:32.672836 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81af45ef-2049-4155-9c0b-ae722e6b8c8a" path="/var/lib/kubelet/pods/81af45ef-2049-4155-9c0b-ae722e6b8c8a/volumes" Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.419423 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.548243 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bfm2\" (UniqueName: \"kubernetes.io/projected/4331a7b0-93b0-40b7-9b53-77b0664942b8-kube-api-access-5bfm2\") pod \"4331a7b0-93b0-40b7-9b53-77b0664942b8\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.548329 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-util\") pod \"4331a7b0-93b0-40b7-9b53-77b0664942b8\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.548379 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-bundle\") pod \"4331a7b0-93b0-40b7-9b53-77b0664942b8\" (UID: \"4331a7b0-93b0-40b7-9b53-77b0664942b8\") " Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.549820 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-bundle" (OuterVolumeSpecName: "bundle") pod "4331a7b0-93b0-40b7-9b53-77b0664942b8" (UID: "4331a7b0-93b0-40b7-9b53-77b0664942b8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.557426 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4331a7b0-93b0-40b7-9b53-77b0664942b8-kube-api-access-5bfm2" (OuterVolumeSpecName: "kube-api-access-5bfm2") pod "4331a7b0-93b0-40b7-9b53-77b0664942b8" (UID: "4331a7b0-93b0-40b7-9b53-77b0664942b8"). InnerVolumeSpecName "kube-api-access-5bfm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.562928 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-util" (OuterVolumeSpecName: "util") pod "4331a7b0-93b0-40b7-9b53-77b0664942b8" (UID: "4331a7b0-93b0-40b7-9b53-77b0664942b8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.649757 4925 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.649801 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bfm2\" (UniqueName: \"kubernetes.io/projected/4331a7b0-93b0-40b7-9b53-77b0664942b8-kube-api-access-5bfm2\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:33 crc kubenswrapper[4925]: I0202 11:10:33.649815 4925 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4331a7b0-93b0-40b7-9b53-77b0664942b8-util\") on node \"crc\" DevicePath \"\"" Feb 02 11:10:34 crc kubenswrapper[4925]: I0202 11:10:34.185247 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" Feb 02 11:10:34 crc kubenswrapper[4925]: I0202 11:10:34.185246 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns" event={"ID":"4331a7b0-93b0-40b7-9b53-77b0664942b8","Type":"ContainerDied","Data":"39dd2d86db2ddd3c561f3bcfb8e59d3a68b3a70f44142b1a380e20f109bd49fc"} Feb 02 11:10:34 crc kubenswrapper[4925]: I0202 11:10:34.185342 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39dd2d86db2ddd3c561f3bcfb8e59d3a68b3a70f44142b1a380e20f109bd49fc" Feb 02 11:10:34 crc kubenswrapper[4925]: I0202 11:10:34.187398 4925 generic.go:334] "Generic (PLEG): container finished" podID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerID="15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e" exitCode=0 Feb 02 11:10:34 crc kubenswrapper[4925]: I0202 11:10:34.187448 4925 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7tfm" event={"ID":"2e817ea2-354d-42f1-bed6-64121e9a8778","Type":"ContainerDied","Data":"15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e"} Feb 02 11:10:35 crc kubenswrapper[4925]: I0202 11:10:35.194056 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7tfm" event={"ID":"2e817ea2-354d-42f1-bed6-64121e9a8778","Type":"ContainerStarted","Data":"58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d"} Feb 02 11:10:35 crc kubenswrapper[4925]: I0202 11:10:35.214907 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n7tfm" podStartSLOduration=2.795336774 podStartE2EDuration="5.214888776s" podCreationTimestamp="2026-02-02 11:10:30 +0000 UTC" firstStartedPulling="2026-02-02 11:10:32.17838118 +0000 UTC m=+809.182630142" lastFinishedPulling="2026-02-02 11:10:34.597933172 +0000 UTC m=+811.602182144" observedRunningTime="2026-02-02 11:10:35.212507432 +0000 UTC m=+812.216756394" watchObservedRunningTime="2026-02-02 11:10:35.214888776 +0000 UTC m=+812.219137738" Feb 02 11:10:40 crc kubenswrapper[4925]: I0202 11:10:40.893052 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:40 crc kubenswrapper[4925]: I0202 11:10:40.893566 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:40 crc kubenswrapper[4925]: I0202 11:10:40.930936 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:41 crc kubenswrapper[4925]: I0202 11:10:41.258067 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n7tfm" Feb 02 11:10:41 crc kubenswrapper[4925]: I0202 11:10:41.544790 4925 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7tfm"] Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.067432 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm"] Feb 02 11:10:42 crc kubenswrapper[4925]: E0202 11:10:42.067932 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerName="util" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.067945 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerName="util" Feb 02 11:10:42 crc kubenswrapper[4925]: E0202 11:10:42.067964 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerName="pull" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.067969 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerName="pull" Feb 02 11:10:42 crc kubenswrapper[4925]: E0202 11:10:42.067980 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerName="extract" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.067985 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerName="extract" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.068136 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4331a7b0-93b0-40b7-9b53-77b0664942b8" containerName="extract" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.068590 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.071613 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.071647 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-cm6q5" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.073414 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.073415 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.075234 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.090849 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm"] Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.253101 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f1c0635-1bd7-4997-b0bd-5f57e7bd2893-webhook-cert\") pod \"metallb-operator-controller-manager-7c47d49988-6g6jm\" (UID: \"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893\") " pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.253308 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f1c0635-1bd7-4997-b0bd-5f57e7bd2893-apiservice-cert\") pod \"metallb-operator-controller-manager-7c47d49988-6g6jm\" (UID: 
\"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893\") " pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.253413 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nffxs\" (UniqueName: \"kubernetes.io/projected/5f1c0635-1bd7-4997-b0bd-5f57e7bd2893-kube-api-access-nffxs\") pod \"metallb-operator-controller-manager-7c47d49988-6g6jm\" (UID: \"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893\") " pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.354770 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nffxs\" (UniqueName: \"kubernetes.io/projected/5f1c0635-1bd7-4997-b0bd-5f57e7bd2893-kube-api-access-nffxs\") pod \"metallb-operator-controller-manager-7c47d49988-6g6jm\" (UID: \"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893\") " pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.354858 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f1c0635-1bd7-4997-b0bd-5f57e7bd2893-webhook-cert\") pod \"metallb-operator-controller-manager-7c47d49988-6g6jm\" (UID: \"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893\") " pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.354904 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f1c0635-1bd7-4997-b0bd-5f57e7bd2893-apiservice-cert\") pod \"metallb-operator-controller-manager-7c47d49988-6g6jm\" (UID: \"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893\") " pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.362754 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f1c0635-1bd7-4997-b0bd-5f57e7bd2893-webhook-cert\") pod \"metallb-operator-controller-manager-7c47d49988-6g6jm\" (UID: \"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893\") " pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.362849 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f1c0635-1bd7-4997-b0bd-5f57e7bd2893-apiservice-cert\") pod \"metallb-operator-controller-manager-7c47d49988-6g6jm\" (UID: \"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893\") " pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.375250 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nffxs\" (UniqueName: \"kubernetes.io/projected/5f1c0635-1bd7-4997-b0bd-5f57e7bd2893-kube-api-access-nffxs\") pod \"metallb-operator-controller-manager-7c47d49988-6g6jm\" (UID: \"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893\") " pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.386120 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.448315 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"] Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.448937 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.452722 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-pzn5q" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.454002 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.454501 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.466985 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"] Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.562264 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0876b510-fef0-4243-b650-8369e62c4a93-webhook-cert\") pod \"metallb-operator-webhook-server-5754578b6f-nb2dq\" (UID: \"0876b510-fef0-4243-b650-8369e62c4a93\") " pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.562642 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f72j6\" (UniqueName: \"kubernetes.io/projected/0876b510-fef0-4243-b650-8369e62c4a93-kube-api-access-f72j6\") pod \"metallb-operator-webhook-server-5754578b6f-nb2dq\" (UID: \"0876b510-fef0-4243-b650-8369e62c4a93\") " pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq" Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.562676 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/0876b510-fef0-4243-b650-8369e62c4a93-apiservice-cert\") pod \"metallb-operator-webhook-server-5754578b6f-nb2dq\" (UID: \"0876b510-fef0-4243-b650-8369e62c4a93\") " pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.663230 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0876b510-fef0-4243-b650-8369e62c4a93-webhook-cert\") pod \"metallb-operator-webhook-server-5754578b6f-nb2dq\" (UID: \"0876b510-fef0-4243-b650-8369e62c4a93\") " pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.663347 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f72j6\" (UniqueName: \"kubernetes.io/projected/0876b510-fef0-4243-b650-8369e62c4a93-kube-api-access-f72j6\") pod \"metallb-operator-webhook-server-5754578b6f-nb2dq\" (UID: \"0876b510-fef0-4243-b650-8369e62c4a93\") " pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.663389 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0876b510-fef0-4243-b650-8369e62c4a93-apiservice-cert\") pod \"metallb-operator-webhook-server-5754578b6f-nb2dq\" (UID: \"0876b510-fef0-4243-b650-8369e62c4a93\") " pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.685365 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0876b510-fef0-4243-b650-8369e62c4a93-apiservice-cert\") pod \"metallb-operator-webhook-server-5754578b6f-nb2dq\" (UID: \"0876b510-fef0-4243-b650-8369e62c4a93\") " pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.685372 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0876b510-fef0-4243-b650-8369e62c4a93-webhook-cert\") pod \"metallb-operator-webhook-server-5754578b6f-nb2dq\" (UID: \"0876b510-fef0-4243-b650-8369e62c4a93\") " pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.686813 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f72j6\" (UniqueName: \"kubernetes.io/projected/0876b510-fef0-4243-b650-8369e62c4a93-kube-api-access-f72j6\") pod \"metallb-operator-webhook-server-5754578b6f-nb2dq\" (UID: \"0876b510-fef0-4243-b650-8369e62c4a93\") " pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.765211 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.873852 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm"]
Feb 02 11:10:42 crc kubenswrapper[4925]: W0202 11:10:42.881682 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f1c0635_1bd7_4997_b0bd_5f57e7bd2893.slice/crio-da83b1c6e3210fa9d9776bcdddd2a57a8d534994069680d7a6ff549dad845128 WatchSource:0}: Error finding container da83b1c6e3210fa9d9776bcdddd2a57a8d534994069680d7a6ff549dad845128: Status 404 returned error can't find the container with id da83b1c6e3210fa9d9776bcdddd2a57a8d534994069680d7a6ff549dad845128
Feb 02 11:10:42 crc kubenswrapper[4925]: I0202 11:10:42.980937 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"]
Feb 02 11:10:42 crc kubenswrapper[4925]: W0202 11:10:42.996102 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0876b510_fef0_4243_b650_8369e62c4a93.slice/crio-2726f851d504e91fb8a0a4b27a74d97eac34c872c3d0bd48bba5bda91f6bb965 WatchSource:0}: Error finding container 2726f851d504e91fb8a0a4b27a74d97eac34c872c3d0bd48bba5bda91f6bb965: Status 404 returned error can't find the container with id 2726f851d504e91fb8a0a4b27a74d97eac34c872c3d0bd48bba5bda91f6bb965
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.235480 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq" event={"ID":"0876b510-fef0-4243-b650-8369e62c4a93","Type":"ContainerStarted","Data":"2726f851d504e91fb8a0a4b27a74d97eac34c872c3d0bd48bba5bda91f6bb965"}
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.236866 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" event={"ID":"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893","Type":"ContainerStarted","Data":"da83b1c6e3210fa9d9776bcdddd2a57a8d534994069680d7a6ff549dad845128"}
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.237037 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n7tfm" podUID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerName="registry-server" containerID="cri-o://58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d" gracePeriod=2
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.623270 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7tfm"
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.775330 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkbkb\" (UniqueName: \"kubernetes.io/projected/2e817ea2-354d-42f1-bed6-64121e9a8778-kube-api-access-tkbkb\") pod \"2e817ea2-354d-42f1-bed6-64121e9a8778\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") "
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.775452 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-catalog-content\") pod \"2e817ea2-354d-42f1-bed6-64121e9a8778\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") "
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.775497 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-utilities\") pod \"2e817ea2-354d-42f1-bed6-64121e9a8778\" (UID: \"2e817ea2-354d-42f1-bed6-64121e9a8778\") "
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.776517 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-utilities" (OuterVolumeSpecName: "utilities") pod "2e817ea2-354d-42f1-bed6-64121e9a8778" (UID: "2e817ea2-354d-42f1-bed6-64121e9a8778"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.780099 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e817ea2-354d-42f1-bed6-64121e9a8778-kube-api-access-tkbkb" (OuterVolumeSpecName: "kube-api-access-tkbkb") pod "2e817ea2-354d-42f1-bed6-64121e9a8778" (UID: "2e817ea2-354d-42f1-bed6-64121e9a8778"). InnerVolumeSpecName "kube-api-access-tkbkb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.876964 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkbkb\" (UniqueName: \"kubernetes.io/projected/2e817ea2-354d-42f1-bed6-64121e9a8778-kube-api-access-tkbkb\") on node \"crc\" DevicePath \"\""
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.876995 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.902569 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e817ea2-354d-42f1-bed6-64121e9a8778" (UID: "2e817ea2-354d-42f1-bed6-64121e9a8778"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:10:43 crc kubenswrapper[4925]: I0202 11:10:43.977990 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e817ea2-354d-42f1-bed6-64121e9a8778-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.252953 4925 generic.go:334] "Generic (PLEG): container finished" podID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerID="58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d" exitCode=0
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.253002 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7tfm" event={"ID":"2e817ea2-354d-42f1-bed6-64121e9a8778","Type":"ContainerDied","Data":"58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d"}
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.253036 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7tfm" event={"ID":"2e817ea2-354d-42f1-bed6-64121e9a8778","Type":"ContainerDied","Data":"9c66f976069f54a3459db8c5b2be7a9b1a5cdfafbd4d21a1d0cc850e124a1434"}
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.253060 4925 scope.go:117] "RemoveContainer" containerID="58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.253215 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7tfm"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.300573 4925 scope.go:117] "RemoveContainer" containerID="15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.304456 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7tfm"]
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.313110 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n7tfm"]
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.317840 4925 scope.go:117] "RemoveContainer" containerID="a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.353469 4925 scope.go:117] "RemoveContainer" containerID="58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d"
Feb 02 11:10:44 crc kubenswrapper[4925]: E0202 11:10:44.353868 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d\": container with ID starting with 58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d not found: ID does not exist" containerID="58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.353899 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d"} err="failed to get container status \"58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d\": rpc error: code = NotFound desc = could not find container \"58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d\": container with ID starting with 58e0c565f8acbb36f125b80f0b34bbec9b8324369d5958fda9ba18997f50567d not found: ID does not exist"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.353925 4925 scope.go:117] "RemoveContainer" containerID="15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e"
Feb 02 11:10:44 crc kubenswrapper[4925]: E0202 11:10:44.354311 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e\": container with ID starting with 15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e not found: ID does not exist" containerID="15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.354434 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e"} err="failed to get container status \"15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e\": rpc error: code = NotFound desc = could not find container \"15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e\": container with ID starting with 15bbc985d7e4dbaabbb5268da2d969a761c086916836633593f6cb9cc08a3e7e not found: ID does not exist"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.354533 4925 scope.go:117] "RemoveContainer" containerID="a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934"
Feb 02 11:10:44 crc kubenswrapper[4925]: E0202 11:10:44.354880 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934\": container with ID starting with a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934 not found: ID does not exist" containerID="a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.354907 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934"} err="failed to get container status \"a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934\": rpc error: code = NotFound desc = could not find container \"a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934\": container with ID starting with a20a6c8a0174f4ef7283ab4befd3fab0ddc56f87d9a68e83df9ff38fdc2b7934 not found: ID does not exist"
Feb 02 11:10:44 crc kubenswrapper[4925]: I0202 11:10:44.687707 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e817ea2-354d-42f1-bed6-64121e9a8778" path="/var/lib/kubelet/pods/2e817ea2-354d-42f1-bed6-64121e9a8778/volumes"
Feb 02 11:10:49 crc kubenswrapper[4925]: I0202 11:10:49.283447 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" event={"ID":"5f1c0635-1bd7-4997-b0bd-5f57e7bd2893","Type":"ContainerStarted","Data":"ec8147f80c3d9882426a7002a7ae9f13dbecab26f20b6951f0b6a043ae03aed8"}
Feb 02 11:10:49 crc kubenswrapper[4925]: I0202 11:10:49.283989 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm"
Feb 02 11:10:49 crc kubenswrapper[4925]: I0202 11:10:49.285209 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq" event={"ID":"0876b510-fef0-4243-b650-8369e62c4a93","Type":"ContainerStarted","Data":"cdb13e3f5899c7c5b3525469dedc472983cf36bc14b4cfad6151425337255dfd"}
Feb 02 11:10:49 crc kubenswrapper[4925]: I0202 11:10:49.285796 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:10:49 crc kubenswrapper[4925]: I0202 11:10:49.305521 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm" podStartSLOduration=1.47774608 podStartE2EDuration="7.305502446s" podCreationTimestamp="2026-02-02 11:10:42 +0000 UTC" firstStartedPulling="2026-02-02 11:10:42.890307217 +0000 UTC m=+819.894556169" lastFinishedPulling="2026-02-02 11:10:48.718063573 +0000 UTC m=+825.722312535" observedRunningTime="2026-02-02 11:10:49.304019927 +0000 UTC m=+826.308268899" watchObservedRunningTime="2026-02-02 11:10:49.305502446 +0000 UTC m=+826.309751408"
Feb 02 11:10:49 crc kubenswrapper[4925]: I0202 11:10:49.328038 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq" podStartSLOduration=1.592191734 podStartE2EDuration="7.328016459s" podCreationTimestamp="2026-02-02 11:10:42 +0000 UTC" firstStartedPulling="2026-02-02 11:10:42.998866353 +0000 UTC m=+820.003115325" lastFinishedPulling="2026-02-02 11:10:48.734691078 +0000 UTC m=+825.738940050" observedRunningTime="2026-02-02 11:10:49.325099091 +0000 UTC m=+826.329348073" watchObservedRunningTime="2026-02-02 11:10:49.328016459 +0000 UTC m=+826.332265421"
Feb 02 11:11:02 crc kubenswrapper[4925]: I0202 11:11:02.769478 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5754578b6f-nb2dq"
Feb 02 11:11:22 crc kubenswrapper[4925]: I0202 11:11:22.389014 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7c47d49988-6g6jm"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.064766 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qx9hq"]
Feb 02 11:11:23 crc kubenswrapper[4925]: E0202 11:11:23.065302 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerName="extract-content"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.065317 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerName="extract-content"
Feb 02 11:11:23 crc kubenswrapper[4925]: E0202 11:11:23.065343 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerName="extract-utilities"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.065350 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerName="extract-utilities"
Feb 02 11:11:23 crc kubenswrapper[4925]: E0202 11:11:23.065359 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerName="registry-server"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.065365 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerName="registry-server"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.065535 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e817ea2-354d-42f1-bed6-64121e9a8778" containerName="registry-server"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.067474 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.068577 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"]
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.069490 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.077565 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.077868 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.078133 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-pk6r2"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.078281 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.087132 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"]
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.156962 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-dqhvw"]
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.157978 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.163433 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.163510 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.163642 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.163771 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q6nrg"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180487 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-metallb-excludel2\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180530 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-metrics\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180565 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbbc\" (UniqueName: \"kubernetes.io/projected/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-kube-api-access-lqbbc\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180580 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-metrics-certs\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180598 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-frr-startup\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180629 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8726l\" (UniqueName: \"kubernetes.io/projected/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-kube-api-access-8726l\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180651 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-frr-sockets\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180666 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-metrics-certs\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180683 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-frr-conf\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180705 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-memberlist\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180722 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5fpmg\" (UID: \"fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180747 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lrxb\" (UniqueName: \"kubernetes.io/projected/fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2-kube-api-access-4lrxb\") pod \"frr-k8s-webhook-server-7df86c4f6c-5fpmg\" (UID: \"fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.180771 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-reloader\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.183146 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-t7z6x"]
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.184400 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-t7z6x"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.190014 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.201538 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-t7z6x"]
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281600 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-frr-sockets\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281646 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-metrics-certs\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281665 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-frr-conf\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281687 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5fpmg\" (UID: \"fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281707 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09785fed-de18-4a9b-b32f-8a3644ede917-cert\") pod \"controller-6968d8fdc4-t7z6x\" (UID: \"09785fed-de18-4a9b-b32f-8a3644ede917\") " pod="metallb-system/controller-6968d8fdc4-t7z6x"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281723 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-memberlist\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281747 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lrxb\" (UniqueName: \"kubernetes.io/projected/fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2-kube-api-access-4lrxb\") pod \"frr-k8s-webhook-server-7df86c4f6c-5fpmg\" (UID: \"fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281771 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-reloader\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281794 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2hpn\" (UniqueName: \"kubernetes.io/projected/09785fed-de18-4a9b-b32f-8a3644ede917-kube-api-access-s2hpn\") pod \"controller-6968d8fdc4-t7z6x\" (UID: \"09785fed-de18-4a9b-b32f-8a3644ede917\") " pod="metallb-system/controller-6968d8fdc4-t7z6x"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281811 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-metallb-excludel2\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281826 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-metrics\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281851 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqbbc\" (UniqueName: \"kubernetes.io/projected/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-kube-api-access-lqbbc\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281866 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-metrics-certs\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281882 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-frr-startup\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281904 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09785fed-de18-4a9b-b32f-8a3644ede917-metrics-certs\") pod \"controller-6968d8fdc4-t7z6x\" (UID: \"09785fed-de18-4a9b-b32f-8a3644ede917\") " pod="metallb-system/controller-6968d8fdc4-t7z6x"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.281926 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8726l\" (UniqueName: \"kubernetes.io/projected/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-kube-api-access-8726l\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.282881 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-frr-sockets\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.283103 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-frr-conf\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.284062 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-frr-startup\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.284276 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-metrics\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: E0202 11:11:23.284476 4925 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 02 11:11:23 crc kubenswrapper[4925]: E0202 11:11:23.284512 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-memberlist podName:263f4c60-783f-4109-bcf6-cbdd5e03ec0e nodeName:}" failed. No retries permitted until 2026-02-02 11:11:23.784500179 +0000 UTC m=+860.788749141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-memberlist") pod "speaker-dqhvw" (UID: "263f4c60-783f-4109-bcf6-cbdd5e03ec0e") : secret "metallb-memberlist" not found
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.284890 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-reloader\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.286519 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-metallb-excludel2\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.290419 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-metrics-certs\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.292181 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-metrics-certs\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.307298 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8726l\" (UniqueName: \"kubernetes.io/projected/04f8da8f-7d17-4f0d-9fb2-5a66470d62dd-kube-api-access-8726l\") pod \"frr-k8s-qx9hq\" (UID: \"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd\") " pod="metallb-system/frr-k8s-qx9hq"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.308877 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-5fpmg\" (UID: \"fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.309519 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lrxb\" (UniqueName: \"kubernetes.io/projected/fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2-kube-api-access-4lrxb\") pod \"frr-k8s-webhook-server-7df86c4f6c-5fpmg\" (UID: \"fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.321272 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqbbc\" (UniqueName: \"kubernetes.io/projected/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-kube-api-access-lqbbc\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.382745 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09785fed-de18-4a9b-b32f-8a3644ede917-cert\") pod \"controller-6968d8fdc4-t7z6x\" (UID: \"09785fed-de18-4a9b-b32f-8a3644ede917\") " pod="metallb-system/controller-6968d8fdc4-t7z6x"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.382840 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2hpn\" (UniqueName: \"kubernetes.io/projected/09785fed-de18-4a9b-b32f-8a3644ede917-kube-api-access-s2hpn\") pod \"controller-6968d8fdc4-t7z6x\" (UID: \"09785fed-de18-4a9b-b32f-8a3644ede917\") " pod="metallb-system/controller-6968d8fdc4-t7z6x"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.382893 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09785fed-de18-4a9b-b32f-8a3644ede917-metrics-certs\") pod \"controller-6968d8fdc4-t7z6x\" (UID: \"09785fed-de18-4a9b-b32f-8a3644ede917\") " pod="metallb-system/controller-6968d8fdc4-t7z6x"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.386148 4925 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.386678 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09785fed-de18-4a9b-b32f-8a3644ede917-metrics-certs\") pod \"controller-6968d8fdc4-t7z6x\" (UID: \"09785fed-de18-4a9b-b32f-8a3644ede917\") " pod="metallb-system/controller-6968d8fdc4-t7z6x"
Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.395640 4925 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/frr-k8s-qx9hq" Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.398588 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09785fed-de18-4a9b-b32f-8a3644ede917-cert\") pod \"controller-6968d8fdc4-t7z6x\" (UID: \"09785fed-de18-4a9b-b32f-8a3644ede917\") " pod="metallb-system/controller-6968d8fdc4-t7z6x" Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.402913 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg" Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.431816 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2hpn\" (UniqueName: \"kubernetes.io/projected/09785fed-de18-4a9b-b32f-8a3644ede917-kube-api-access-s2hpn\") pod \"controller-6968d8fdc4-t7z6x\" (UID: \"09785fed-de18-4a9b-b32f-8a3644ede917\") " pod="metallb-system/controller-6968d8fdc4-t7z6x" Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.499269 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-t7z6x" Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.815917 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-memberlist\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw" Feb 02 11:11:23 crc kubenswrapper[4925]: E0202 11:11:23.816370 4925 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 11:11:23 crc kubenswrapper[4925]: E0202 11:11:23.816423 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-memberlist podName:263f4c60-783f-4109-bcf6-cbdd5e03ec0e nodeName:}" failed. 
No retries permitted until 2026-02-02 11:11:24.816407916 +0000 UTC m=+861.820656878 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-memberlist") pod "speaker-dqhvw" (UID: "263f4c60-783f-4109-bcf6-cbdd5e03ec0e") : secret "metallb-memberlist" not found Feb 02 11:11:23 crc kubenswrapper[4925]: I0202 11:11:23.947111 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg"] Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.006717 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-t7z6x"] Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.481333 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerStarted","Data":"2b9ee4df73da7389eeee6414810cf14c06ef2384dad4346b249783b42171adee"} Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.484217 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-t7z6x" event={"ID":"09785fed-de18-4a9b-b32f-8a3644ede917","Type":"ContainerStarted","Data":"23e166fdb88fc47af8224bff2c1a9fb65f3df08f7d6c57e4e612d714f9e491ca"} Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.484250 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-t7z6x" event={"ID":"09785fed-de18-4a9b-b32f-8a3644ede917","Type":"ContainerStarted","Data":"031468637cc8e90f5cdfad8d6945757ffe6e1d8c78fefff657d7f5173cddd459"} Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.484260 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-t7z6x" event={"ID":"09785fed-de18-4a9b-b32f-8a3644ede917","Type":"ContainerStarted","Data":"d80ca5927de0e1b52bdd779a9b00d8843d54407e2d74ad9d719198700805fc49"} Feb 02 11:11:24 crc 
kubenswrapper[4925]: I0202 11:11:24.484368 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-t7z6x" Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.485982 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg" event={"ID":"fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2","Type":"ContainerStarted","Data":"6b9ca95ecd50d4ad5c35c657063bebda10c8249e0e833d743deb1cb959f63ad6"} Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.681234 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-t7z6x" podStartSLOduration=1.681214983 podStartE2EDuration="1.681214983s" podCreationTimestamp="2026-02-02 11:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:11:24.503647817 +0000 UTC m=+861.507896789" watchObservedRunningTime="2026-02-02 11:11:24.681214983 +0000 UTC m=+861.685463945" Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.828322 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-memberlist\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw" Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.838823 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/263f4c60-783f-4109-bcf6-cbdd5e03ec0e-memberlist\") pod \"speaker-dqhvw\" (UID: \"263f4c60-783f-4109-bcf6-cbdd5e03ec0e\") " pod="metallb-system/speaker-dqhvw" Feb 02 11:11:24 crc kubenswrapper[4925]: I0202 11:11:24.985976 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-dqhvw" Feb 02 11:11:25 crc kubenswrapper[4925]: W0202 11:11:25.005748 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod263f4c60_783f_4109_bcf6_cbdd5e03ec0e.slice/crio-0c02803b89e24be2d528ed5b8544c6730470eacc24134ec6976498981daad2de WatchSource:0}: Error finding container 0c02803b89e24be2d528ed5b8544c6730470eacc24134ec6976498981daad2de: Status 404 returned error can't find the container with id 0c02803b89e24be2d528ed5b8544c6730470eacc24134ec6976498981daad2de Feb 02 11:11:25 crc kubenswrapper[4925]: I0202 11:11:25.501376 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dqhvw" event={"ID":"263f4c60-783f-4109-bcf6-cbdd5e03ec0e","Type":"ContainerStarted","Data":"eca37b5819cc1649960f02651e1e62954318af97331cd679cb10a7eade08eb92"} Feb 02 11:11:25 crc kubenswrapper[4925]: I0202 11:11:25.501444 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dqhvw" event={"ID":"263f4c60-783f-4109-bcf6-cbdd5e03ec0e","Type":"ContainerStarted","Data":"0c02803b89e24be2d528ed5b8544c6730470eacc24134ec6976498981daad2de"} Feb 02 11:11:26 crc kubenswrapper[4925]: I0202 11:11:26.513483 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dqhvw" event={"ID":"263f4c60-783f-4109-bcf6-cbdd5e03ec0e","Type":"ContainerStarted","Data":"bcb2398b5e35afed3a76d31860a017c4e13dac4191294433caee3ca04a8907b0"} Feb 02 11:11:26 crc kubenswrapper[4925]: I0202 11:11:26.513738 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-dqhvw" Feb 02 11:11:26 crc kubenswrapper[4925]: I0202 11:11:26.536117 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-dqhvw" podStartSLOduration=3.5360826210000003 podStartE2EDuration="3.536082621s" podCreationTimestamp="2026-02-02 11:11:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:11:26.531819588 +0000 UTC m=+863.536068550" watchObservedRunningTime="2026-02-02 11:11:26.536082621 +0000 UTC m=+863.540331583" Feb 02 11:11:34 crc kubenswrapper[4925]: I0202 11:11:34.573045 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg" event={"ID":"fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2","Type":"ContainerStarted","Data":"aa51befc1ab21b9516f250db8a711ef1371175fcd9bb1df5b7320f41ba8777d1"} Feb 02 11:11:34 crc kubenswrapper[4925]: I0202 11:11:34.573676 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg" Feb 02 11:11:34 crc kubenswrapper[4925]: I0202 11:11:34.575941 4925 generic.go:334] "Generic (PLEG): container finished" podID="04f8da8f-7d17-4f0d-9fb2-5a66470d62dd" containerID="ed4ad757044006543bdf2e6810f1d11cbbd8996658e2200c76e5c40baa0f4c5a" exitCode=0 Feb 02 11:11:34 crc kubenswrapper[4925]: I0202 11:11:34.575998 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerDied","Data":"ed4ad757044006543bdf2e6810f1d11cbbd8996658e2200c76e5c40baa0f4c5a"} Feb 02 11:11:34 crc kubenswrapper[4925]: I0202 11:11:34.593751 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg" podStartSLOduration=1.517421343 podStartE2EDuration="11.593720729s" podCreationTimestamp="2026-02-02 11:11:23 +0000 UTC" firstStartedPulling="2026-02-02 11:11:23.950767882 +0000 UTC m=+860.955016844" lastFinishedPulling="2026-02-02 11:11:34.027067278 +0000 UTC m=+871.031316230" observedRunningTime="2026-02-02 11:11:34.592298792 +0000 UTC m=+871.596547754" watchObservedRunningTime="2026-02-02 11:11:34.593720729 +0000 UTC m=+871.597969701" Feb 02 
11:11:35 crc kubenswrapper[4925]: I0202 11:11:35.585126 4925 generic.go:334] "Generic (PLEG): container finished" podID="04f8da8f-7d17-4f0d-9fb2-5a66470d62dd" containerID="1bcbc49fb9a516498eed89916d8ba6db25337fa37783a91825fb3335893da8f3" exitCode=0 Feb 02 11:11:35 crc kubenswrapper[4925]: I0202 11:11:35.585230 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerDied","Data":"1bcbc49fb9a516498eed89916d8ba6db25337fa37783a91825fb3335893da8f3"} Feb 02 11:11:36 crc kubenswrapper[4925]: I0202 11:11:36.593153 4925 generic.go:334] "Generic (PLEG): container finished" podID="04f8da8f-7d17-4f0d-9fb2-5a66470d62dd" containerID="dfdd18656b8df0ea1b5fb2350c51e9fc3279a0e5a56ad5ae93a808320fc1928b" exitCode=0 Feb 02 11:11:36 crc kubenswrapper[4925]: I0202 11:11:36.593189 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerDied","Data":"dfdd18656b8df0ea1b5fb2350c51e9fc3279a0e5a56ad5ae93a808320fc1928b"} Feb 02 11:11:37 crc kubenswrapper[4925]: I0202 11:11:37.602282 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerStarted","Data":"2c46c5160adb37386c90f4ebba45880db684e4627a2f27bfefab1af37e180830"} Feb 02 11:11:37 crc kubenswrapper[4925]: I0202 11:11:37.602568 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qx9hq" Feb 02 11:11:37 crc kubenswrapper[4925]: I0202 11:11:37.602579 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerStarted","Data":"5e8be3b993bca0e7ff03011d3cd518276b85af1918f46b1ec7a092f4a81c830f"} Feb 02 11:11:37 crc kubenswrapper[4925]: I0202 11:11:37.602589 4925 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerStarted","Data":"6bd80c4d9c708cd9df7269f1b1ea33cd170bb08ceaef60733354655ae1da81d0"} Feb 02 11:11:37 crc kubenswrapper[4925]: I0202 11:11:37.602598 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerStarted","Data":"5c358d42c564590150e6e2a1314da74defc5433f8f8fe2731c81c0c8f55ac32a"} Feb 02 11:11:37 crc kubenswrapper[4925]: I0202 11:11:37.602605 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerStarted","Data":"a47f1d57fb95137d78eddecaeaca3408d7f89c054cc06c9c3ec9646610ac6246"} Feb 02 11:11:37 crc kubenswrapper[4925]: I0202 11:11:37.602613 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qx9hq" event={"ID":"04f8da8f-7d17-4f0d-9fb2-5a66470d62dd","Type":"ContainerStarted","Data":"d4d133e8f5a89a0b988dc4eb247f77bb6c35a7b253f7dab31e3c745761a0ca9d"} Feb 02 11:11:37 crc kubenswrapper[4925]: I0202 11:11:37.630541 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qx9hq" podStartSLOduration=4.256337432 podStartE2EDuration="14.630523946s" podCreationTimestamp="2026-02-02 11:11:23 +0000 UTC" firstStartedPulling="2026-02-02 11:11:23.675475296 +0000 UTC m=+860.679724268" lastFinishedPulling="2026-02-02 11:11:34.04966182 +0000 UTC m=+871.053910782" observedRunningTime="2026-02-02 11:11:37.627063494 +0000 UTC m=+874.631312496" watchObservedRunningTime="2026-02-02 11:11:37.630523946 +0000 UTC m=+874.634772908" Feb 02 11:11:38 crc kubenswrapper[4925]: I0202 11:11:38.396480 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qx9hq" Feb 02 11:11:38 crc kubenswrapper[4925]: I0202 11:11:38.440323 4925 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="metallb-system/frr-k8s-qx9hq" Feb 02 11:11:43 crc kubenswrapper[4925]: I0202 11:11:43.503252 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-t7z6x" Feb 02 11:11:44 crc kubenswrapper[4925]: I0202 11:11:44.989982 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-dqhvw" Feb 02 11:11:47 crc kubenswrapper[4925]: I0202 11:11:47.712144 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d7lhl"] Feb 02 11:11:47 crc kubenswrapper[4925]: I0202 11:11:47.713471 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d7lhl" Feb 02 11:11:47 crc kubenswrapper[4925]: I0202 11:11:47.716243 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 11:11:47 crc kubenswrapper[4925]: I0202 11:11:47.716599 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 11:11:47 crc kubenswrapper[4925]: I0202 11:11:47.718760 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fv52w" Feb 02 11:11:47 crc kubenswrapper[4925]: I0202 11:11:47.725551 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d7lhl"] Feb 02 11:11:47 crc kubenswrapper[4925]: I0202 11:11:47.815959 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl8c5\" (UniqueName: \"kubernetes.io/projected/21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666-kube-api-access-jl8c5\") pod \"openstack-operator-index-d7lhl\" (UID: \"21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666\") " pod="openstack-operators/openstack-operator-index-d7lhl" Feb 02 11:11:47 crc kubenswrapper[4925]: 
I0202 11:11:47.917177 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl8c5\" (UniqueName: \"kubernetes.io/projected/21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666-kube-api-access-jl8c5\") pod \"openstack-operator-index-d7lhl\" (UID: \"21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666\") " pod="openstack-operators/openstack-operator-index-d7lhl" Feb 02 11:11:47 crc kubenswrapper[4925]: I0202 11:11:47.939698 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl8c5\" (UniqueName: \"kubernetes.io/projected/21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666-kube-api-access-jl8c5\") pod \"openstack-operator-index-d7lhl\" (UID: \"21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666\") " pod="openstack-operators/openstack-operator-index-d7lhl" Feb 02 11:11:48 crc kubenswrapper[4925]: I0202 11:11:48.033348 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d7lhl" Feb 02 11:11:48 crc kubenswrapper[4925]: I0202 11:11:48.309671 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d7lhl"] Feb 02 11:11:48 crc kubenswrapper[4925]: I0202 11:11:48.673359 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d7lhl" event={"ID":"21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666","Type":"ContainerStarted","Data":"19efb7dbf26a981b1d264e6efc33cabb1e79a11b357ee7b9f5cd1f08f638e85d"} Feb 02 11:11:51 crc kubenswrapper[4925]: I0202 11:11:51.091325 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d7lhl"] Feb 02 11:11:51 crc kubenswrapper[4925]: I0202 11:11:51.686181 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d7lhl" event={"ID":"21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666","Type":"ContainerStarted","Data":"7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b"} Feb 02 
11:11:51 crc kubenswrapper[4925]: I0202 11:11:51.694977 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-grspq"] Feb 02 11:11:51 crc kubenswrapper[4925]: I0202 11:11:51.696156 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-grspq" Feb 02 11:11:51 crc kubenswrapper[4925]: I0202 11:11:51.706323 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-grspq"] Feb 02 11:11:51 crc kubenswrapper[4925]: I0202 11:11:51.708731 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d7lhl" podStartSLOduration=2.394489759 podStartE2EDuration="4.708712333s" podCreationTimestamp="2026-02-02 11:11:47 +0000 UTC" firstStartedPulling="2026-02-02 11:11:48.32674942 +0000 UTC m=+885.330998382" lastFinishedPulling="2026-02-02 11:11:50.640971994 +0000 UTC m=+887.645220956" observedRunningTime="2026-02-02 11:11:51.706497774 +0000 UTC m=+888.710746746" watchObservedRunningTime="2026-02-02 11:11:51.708712333 +0000 UTC m=+888.712961295" Feb 02 11:11:51 crc kubenswrapper[4925]: I0202 11:11:51.784357 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twjcp\" (UniqueName: \"kubernetes.io/projected/f66c6d9e-dc09-4ffc-af2b-672b8406c132-kube-api-access-twjcp\") pod \"openstack-operator-index-grspq\" (UID: \"f66c6d9e-dc09-4ffc-af2b-672b8406c132\") " pod="openstack-operators/openstack-operator-index-grspq" Feb 02 11:11:51 crc kubenswrapper[4925]: I0202 11:11:51.885597 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twjcp\" (UniqueName: \"kubernetes.io/projected/f66c6d9e-dc09-4ffc-af2b-672b8406c132-kube-api-access-twjcp\") pod \"openstack-operator-index-grspq\" (UID: \"f66c6d9e-dc09-4ffc-af2b-672b8406c132\") " 
pod="openstack-operators/openstack-operator-index-grspq" Feb 02 11:11:51 crc kubenswrapper[4925]: I0202 11:11:51.905893 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twjcp\" (UniqueName: \"kubernetes.io/projected/f66c6d9e-dc09-4ffc-af2b-672b8406c132-kube-api-access-twjcp\") pod \"openstack-operator-index-grspq\" (UID: \"f66c6d9e-dc09-4ffc-af2b-672b8406c132\") " pod="openstack-operators/openstack-operator-index-grspq" Feb 02 11:11:52 crc kubenswrapper[4925]: I0202 11:11:52.020138 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-grspq" Feb 02 11:11:52 crc kubenswrapper[4925]: I0202 11:11:52.440513 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-grspq"] Feb 02 11:11:52 crc kubenswrapper[4925]: W0202 11:11:52.444203 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf66c6d9e_dc09_4ffc_af2b_672b8406c132.slice/crio-c6df54b0e9d60a72b148feefbe2f8ea32bb3d8dfa4c70a67bf455ac35d5af8a8 WatchSource:0}: Error finding container c6df54b0e9d60a72b148feefbe2f8ea32bb3d8dfa4c70a67bf455ac35d5af8a8: Status 404 returned error can't find the container with id c6df54b0e9d60a72b148feefbe2f8ea32bb3d8dfa4c70a67bf455ac35d5af8a8 Feb 02 11:11:52 crc kubenswrapper[4925]: I0202 11:11:52.693707 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-grspq" event={"ID":"f66c6d9e-dc09-4ffc-af2b-672b8406c132","Type":"ContainerStarted","Data":"c6df54b0e9d60a72b148feefbe2f8ea32bb3d8dfa4c70a67bf455ac35d5af8a8"} Feb 02 11:11:52 crc kubenswrapper[4925]: I0202 11:11:52.693813 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-d7lhl" podUID="21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666" containerName="registry-server" 
containerID="cri-o://7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b" gracePeriod=2 Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.055753 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d7lhl" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.207824 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl8c5\" (UniqueName: \"kubernetes.io/projected/21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666-kube-api-access-jl8c5\") pod \"21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666\" (UID: \"21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666\") " Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.213455 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666-kube-api-access-jl8c5" (OuterVolumeSpecName: "kube-api-access-jl8c5") pod "21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666" (UID: "21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666"). InnerVolumeSpecName "kube-api-access-jl8c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.309877 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl8c5\" (UniqueName: \"kubernetes.io/projected/21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666-kube-api-access-jl8c5\") on node \"crc\" DevicePath \"\"" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.398804 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qx9hq" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.409766 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-5fpmg" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.702324 4925 generic.go:334] "Generic (PLEG): container finished" podID="21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666" containerID="7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b" exitCode=0 Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.702398 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d7lhl" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.702386 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d7lhl" event={"ID":"21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666","Type":"ContainerDied","Data":"7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b"} Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.703361 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d7lhl" event={"ID":"21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666","Type":"ContainerDied","Data":"19efb7dbf26a981b1d264e6efc33cabb1e79a11b357ee7b9f5cd1f08f638e85d"} Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.703397 4925 scope.go:117] "RemoveContainer" containerID="7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.705507 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-grspq" event={"ID":"f66c6d9e-dc09-4ffc-af2b-672b8406c132","Type":"ContainerStarted","Data":"5e561415ebfe5f78ddca8808449a354105a5f04bb12ba303b9e394d55d345dc2"} Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.731191 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-grspq" podStartSLOduration=2.626907186 podStartE2EDuration="2.731175771s" podCreationTimestamp="2026-02-02 11:11:51 +0000 UTC" firstStartedPulling="2026-02-02 11:11:52.4484215 +0000 UTC m=+889.452670462" lastFinishedPulling="2026-02-02 11:11:52.552690085 +0000 UTC m=+889.556939047" observedRunningTime="2026-02-02 11:11:53.727277707 +0000 UTC m=+890.731526689" watchObservedRunningTime="2026-02-02 11:11:53.731175771 +0000 UTC m=+890.735424723" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.732669 4925 scope.go:117] "RemoveContainer" 
containerID="7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b" Feb 02 11:11:53 crc kubenswrapper[4925]: E0202 11:11:53.734801 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b\": container with ID starting with 7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b not found: ID does not exist" containerID="7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.734831 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b"} err="failed to get container status \"7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b\": rpc error: code = NotFound desc = could not find container \"7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b\": container with ID starting with 7b8a5970be6297de3401afcf7514e69a97ee85f0441d10d1466e368c530b1f0b not found: ID does not exist" Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.747211 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d7lhl"] Feb 02 11:11:53 crc kubenswrapper[4925]: I0202 11:11:53.750742 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-d7lhl"] Feb 02 11:11:54 crc kubenswrapper[4925]: I0202 11:11:54.671736 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666" path="/var/lib/kubelet/pods/21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666/volumes" Feb 02 11:12:02 crc kubenswrapper[4925]: I0202 11:12:02.020337 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-grspq" Feb 02 11:12:02 crc kubenswrapper[4925]: I0202 11:12:02.020789 
4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-grspq" Feb 02 11:12:02 crc kubenswrapper[4925]: I0202 11:12:02.054708 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-grspq" Feb 02 11:12:02 crc kubenswrapper[4925]: I0202 11:12:02.797608 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-grspq" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.559880 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr"] Feb 02 11:12:10 crc kubenswrapper[4925]: E0202 11:12:10.561557 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666" containerName="registry-server" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.561670 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666" containerName="registry-server" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.561878 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="21fa3c8c-0632-46c0-8a2c-a7d1a6bbf666" containerName="registry-server" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.562923 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.565128 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2bp82" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.573417 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr"] Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.728741 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-util\") pod \"ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.728794 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzjb\" (UniqueName: \"kubernetes.io/projected/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-kube-api-access-sqzjb\") pod \"ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.729067 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-bundle\") pod \"ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 
11:12:10.829984 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-util\") pod \"ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.830356 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzjb\" (UniqueName: \"kubernetes.io/projected/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-kube-api-access-sqzjb\") pod \"ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.830515 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-util\") pod \"ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.830912 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-bundle\") pod \"ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.831239 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-bundle\") pod \"ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.851809 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzjb\" (UniqueName: \"kubernetes.io/projected/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-kube-api-access-sqzjb\") pod \"ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:10 crc kubenswrapper[4925]: I0202 11:12:10.877894 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:11 crc kubenswrapper[4925]: I0202 11:12:11.320217 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr"] Feb 02 11:12:11 crc kubenswrapper[4925]: W0202 11:12:11.330309 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b04846_e7f8_4fe5_9878_8cc586d96b5a.slice/crio-aa7b9268c055f49a90190821e325ea56aabe4d1e95a5720beaec3200554e6d7c WatchSource:0}: Error finding container aa7b9268c055f49a90190821e325ea56aabe4d1e95a5720beaec3200554e6d7c: Status 404 returned error can't find the container with id aa7b9268c055f49a90190821e325ea56aabe4d1e95a5720beaec3200554e6d7c Feb 02 11:12:11 crc kubenswrapper[4925]: I0202 11:12:11.821706 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" 
event={"ID":"d2b04846-e7f8-4fe5-9878-8cc586d96b5a","Type":"ContainerStarted","Data":"aa7b9268c055f49a90190821e325ea56aabe4d1e95a5720beaec3200554e6d7c"} Feb 02 11:12:12 crc kubenswrapper[4925]: I0202 11:12:12.829692 4925 generic.go:334] "Generic (PLEG): container finished" podID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerID="7fe1bb6bc833b853c2a4897baa4920bce9ac2887ad4ce8458fe4f663bf5388d8" exitCode=0 Feb 02 11:12:12 crc kubenswrapper[4925]: I0202 11:12:12.829733 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" event={"ID":"d2b04846-e7f8-4fe5-9878-8cc586d96b5a","Type":"ContainerDied","Data":"7fe1bb6bc833b853c2a4897baa4920bce9ac2887ad4ce8458fe4f663bf5388d8"} Feb 02 11:12:13 crc kubenswrapper[4925]: I0202 11:12:13.399321 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:12:13 crc kubenswrapper[4925]: I0202 11:12:13.399421 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:12:18 crc kubenswrapper[4925]: I0202 11:12:18.263543 4925 generic.go:334] "Generic (PLEG): container finished" podID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerID="45167bf2c4899bfe9facfe8c151e427a17a919621c7f69d58af5c3db2b742475" exitCode=0 Feb 02 11:12:18 crc kubenswrapper[4925]: I0202 11:12:18.263615 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" 
event={"ID":"d2b04846-e7f8-4fe5-9878-8cc586d96b5a","Type":"ContainerDied","Data":"45167bf2c4899bfe9facfe8c151e427a17a919621c7f69d58af5c3db2b742475"} Feb 02 11:12:19 crc kubenswrapper[4925]: I0202 11:12:19.275418 4925 generic.go:334] "Generic (PLEG): container finished" podID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerID="921776a8733643a05da1e17673d159e06905a85ca33ae29696d6516690e55afe" exitCode=0 Feb 02 11:12:19 crc kubenswrapper[4925]: I0202 11:12:19.275515 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" event={"ID":"d2b04846-e7f8-4fe5-9878-8cc586d96b5a","Type":"ContainerDied","Data":"921776a8733643a05da1e17673d159e06905a85ca33ae29696d6516690e55afe"} Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.506702 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.659093 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-util\") pod \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.659186 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqzjb\" (UniqueName: \"kubernetes.io/projected/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-kube-api-access-sqzjb\") pod \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\" (UID: \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.659302 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-bundle\") pod \"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\" (UID: 
\"d2b04846-e7f8-4fe5-9878-8cc586d96b5a\") " Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.660131 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-bundle" (OuterVolumeSpecName: "bundle") pod "d2b04846-e7f8-4fe5-9878-8cc586d96b5a" (UID: "d2b04846-e7f8-4fe5-9878-8cc586d96b5a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.665095 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-kube-api-access-sqzjb" (OuterVolumeSpecName: "kube-api-access-sqzjb") pod "d2b04846-e7f8-4fe5-9878-8cc586d96b5a" (UID: "d2b04846-e7f8-4fe5-9878-8cc586d96b5a"). InnerVolumeSpecName "kube-api-access-sqzjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.675571 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-util" (OuterVolumeSpecName: "util") pod "d2b04846-e7f8-4fe5-9878-8cc586d96b5a" (UID: "d2b04846-e7f8-4fe5-9878-8cc586d96b5a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.760470 4925 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-util\") on node \"crc\" DevicePath \"\"" Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.760502 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqzjb\" (UniqueName: \"kubernetes.io/projected/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-kube-api-access-sqzjb\") on node \"crc\" DevicePath \"\"" Feb 02 11:12:20 crc kubenswrapper[4925]: I0202 11:12:20.760513 4925 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2b04846-e7f8-4fe5-9878-8cc586d96b5a-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:12:21 crc kubenswrapper[4925]: I0202 11:12:21.290549 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" event={"ID":"d2b04846-e7f8-4fe5-9878-8cc586d96b5a","Type":"ContainerDied","Data":"aa7b9268c055f49a90190821e325ea56aabe4d1e95a5720beaec3200554e6d7c"} Feb 02 11:12:21 crc kubenswrapper[4925]: I0202 11:12:21.290594 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr" Feb 02 11:12:21 crc kubenswrapper[4925]: I0202 11:12:21.290613 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa7b9268c055f49a90190821e325ea56aabe4d1e95a5720beaec3200554e6d7c" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.618806 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz"] Feb 02 11:12:27 crc kubenswrapper[4925]: E0202 11:12:27.619509 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerName="pull" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.619521 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerName="pull" Feb 02 11:12:27 crc kubenswrapper[4925]: E0202 11:12:27.619532 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerName="extract" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.619539 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerName="extract" Feb 02 11:12:27 crc kubenswrapper[4925]: E0202 11:12:27.619553 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerName="util" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.619561 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerName="util" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.619688 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b04846-e7f8-4fe5-9878-8cc586d96b5a" containerName="extract" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.620048 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.621814 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zndbt" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.649446 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz"] Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.759014 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxt65\" (UniqueName: \"kubernetes.io/projected/f0498a78-8295-4910-bf25-61219ef0105c-kube-api-access-xxt65\") pod \"openstack-operator-controller-init-7bfc86c845-8crkz\" (UID: \"f0498a78-8295-4910-bf25-61219ef0105c\") " pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.860741 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxt65\" (UniqueName: \"kubernetes.io/projected/f0498a78-8295-4910-bf25-61219ef0105c-kube-api-access-xxt65\") pod \"openstack-operator-controller-init-7bfc86c845-8crkz\" (UID: \"f0498a78-8295-4910-bf25-61219ef0105c\") " pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.878712 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxt65\" (UniqueName: \"kubernetes.io/projected/f0498a78-8295-4910-bf25-61219ef0105c-kube-api-access-xxt65\") pod \"openstack-operator-controller-init-7bfc86c845-8crkz\" (UID: \"f0498a78-8295-4910-bf25-61219ef0105c\") " pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" Feb 02 11:12:27 crc kubenswrapper[4925]: I0202 11:12:27.935368 4925 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" Feb 02 11:12:28 crc kubenswrapper[4925]: I0202 11:12:28.218006 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz"] Feb 02 11:12:28 crc kubenswrapper[4925]: I0202 11:12:28.340277 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" event={"ID":"f0498a78-8295-4910-bf25-61219ef0105c","Type":"ContainerStarted","Data":"62c0dfa4c84d8608f608b8069d1c7cb70960bcaa6d0b3b7c728a072e9a417f56"} Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.643017 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86rdz"] Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.645185 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.650120 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86rdz"] Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.691630 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-catalog-content\") pod \"certified-operators-86rdz\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.691663 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-utilities\") pod \"certified-operators-86rdz\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " 
pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.691707 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ts7l\" (UniqueName: \"kubernetes.io/projected/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-kube-api-access-5ts7l\") pod \"certified-operators-86rdz\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.792618 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-catalog-content\") pod \"certified-operators-86rdz\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.792982 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-utilities\") pod \"certified-operators-86rdz\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.793050 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ts7l\" (UniqueName: \"kubernetes.io/projected/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-kube-api-access-5ts7l\") pod \"certified-operators-86rdz\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.793442 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-catalog-content\") pod \"certified-operators-86rdz\" (UID: 
\"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.793728 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-utilities\") pod \"certified-operators-86rdz\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:32 crc kubenswrapper[4925]: I0202 11:12:32.815685 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ts7l\" (UniqueName: \"kubernetes.io/projected/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-kube-api-access-5ts7l\") pod \"certified-operators-86rdz\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:33 crc kubenswrapper[4925]: I0202 11:12:33.011920 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:33 crc kubenswrapper[4925]: I0202 11:12:33.288939 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86rdz"] Feb 02 11:12:33 crc kubenswrapper[4925]: W0202 11:12:33.297019 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf04645b5_9a7c_4bd5_b13d_fe2a3b842450.slice/crio-47072402034cada9f449cfef6947c7ed585f088c583fba1a9be635c31f723467 WatchSource:0}: Error finding container 47072402034cada9f449cfef6947c7ed585f088c583fba1a9be635c31f723467: Status 404 returned error can't find the container with id 47072402034cada9f449cfef6947c7ed585f088c583fba1a9be635c31f723467 Feb 02 11:12:33 crc kubenswrapper[4925]: I0202 11:12:33.375079 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" 
event={"ID":"f0498a78-8295-4910-bf25-61219ef0105c","Type":"ContainerStarted","Data":"f3bfce2718919dfe875e8394d6addedc8045e3e2ab42809052db243557e35ec0"} Feb 02 11:12:33 crc kubenswrapper[4925]: I0202 11:12:33.375469 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" Feb 02 11:12:33 crc kubenswrapper[4925]: I0202 11:12:33.377496 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rdz" event={"ID":"f04645b5-9a7c-4bd5-b13d-fe2a3b842450","Type":"ContainerStarted","Data":"47072402034cada9f449cfef6947c7ed585f088c583fba1a9be635c31f723467"} Feb 02 11:12:33 crc kubenswrapper[4925]: I0202 11:12:33.409597 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" podStartSLOduration=1.9135039059999999 podStartE2EDuration="6.409577821s" podCreationTimestamp="2026-02-02 11:12:27 +0000 UTC" firstStartedPulling="2026-02-02 11:12:28.224625631 +0000 UTC m=+925.228874593" lastFinishedPulling="2026-02-02 11:12:32.720699546 +0000 UTC m=+929.724948508" observedRunningTime="2026-02-02 11:12:33.403010426 +0000 UTC m=+930.407259408" watchObservedRunningTime="2026-02-02 11:12:33.409577821 +0000 UTC m=+930.413826793" Feb 02 11:12:34 crc kubenswrapper[4925]: I0202 11:12:34.383129 4925 generic.go:334] "Generic (PLEG): container finished" podID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerID="efddb6a7dcbd995e53d2a9560b6c388dc06cf7e4a080a3f186eba48c05d82d87" exitCode=0 Feb 02 11:12:34 crc kubenswrapper[4925]: I0202 11:12:34.383203 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rdz" event={"ID":"f04645b5-9a7c-4bd5-b13d-fe2a3b842450","Type":"ContainerDied","Data":"efddb6a7dcbd995e53d2a9560b6c388dc06cf7e4a080a3f186eba48c05d82d87"} Feb 02 11:12:36 crc kubenswrapper[4925]: I0202 11:12:36.397494 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rdz" event={"ID":"f04645b5-9a7c-4bd5-b13d-fe2a3b842450","Type":"ContainerStarted","Data":"c8b01f6dc2b70c71cd9b03101b89146732798bf13bb34adcc672c84b72e15d5e"} Feb 02 11:12:37 crc kubenswrapper[4925]: I0202 11:12:37.404832 4925 generic.go:334] "Generic (PLEG): container finished" podID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerID="c8b01f6dc2b70c71cd9b03101b89146732798bf13bb34adcc672c84b72e15d5e" exitCode=0 Feb 02 11:12:37 crc kubenswrapper[4925]: I0202 11:12:37.404881 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rdz" event={"ID":"f04645b5-9a7c-4bd5-b13d-fe2a3b842450","Type":"ContainerDied","Data":"c8b01f6dc2b70c71cd9b03101b89146732798bf13bb34adcc672c84b72e15d5e"} Feb 02 11:12:37 crc kubenswrapper[4925]: I0202 11:12:37.938229 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7bfc86c845-8crkz" Feb 02 11:12:38 crc kubenswrapper[4925]: I0202 11:12:38.423900 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rdz" event={"ID":"f04645b5-9a7c-4bd5-b13d-fe2a3b842450","Type":"ContainerStarted","Data":"2cb7dedff5a9add0474c33655650b13e76efa5af901a55890099e51b27191171"} Feb 02 11:12:38 crc kubenswrapper[4925]: I0202 11:12:38.467062 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86rdz" podStartSLOduration=2.855358851 podStartE2EDuration="6.467037327s" podCreationTimestamp="2026-02-02 11:12:32 +0000 UTC" firstStartedPulling="2026-02-02 11:12:34.385194787 +0000 UTC m=+931.389443749" lastFinishedPulling="2026-02-02 11:12:37.996873263 +0000 UTC m=+935.001122225" observedRunningTime="2026-02-02 11:12:38.460100072 +0000 UTC m=+935.464349024" watchObservedRunningTime="2026-02-02 11:12:38.467037327 +0000 UTC 
m=+935.471286299" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.012111 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.012507 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.051452 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.398289 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.398350 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.487689 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.636420 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-94cd8"] Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.639053 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.643590 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94cd8"] Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.723543 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5jjd\" (UniqueName: \"kubernetes.io/projected/39421b95-3cc2-4396-bb80-a12f4bc41792-kube-api-access-k5jjd\") pod \"redhat-marketplace-94cd8\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.723597 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-utilities\") pod \"redhat-marketplace-94cd8\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.723667 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-catalog-content\") pod \"redhat-marketplace-94cd8\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.824560 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5jjd\" (UniqueName: \"kubernetes.io/projected/39421b95-3cc2-4396-bb80-a12f4bc41792-kube-api-access-k5jjd\") pod \"redhat-marketplace-94cd8\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.824627 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-utilities\") pod \"redhat-marketplace-94cd8\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.824694 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-catalog-content\") pod \"redhat-marketplace-94cd8\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.920923 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-catalog-content\") pod \"redhat-marketplace-94cd8\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.921812 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-utilities\") pod \"redhat-marketplace-94cd8\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.922103 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5jjd\" (UniqueName: \"kubernetes.io/projected/39421b95-3cc2-4396-bb80-a12f4bc41792-kube-api-access-k5jjd\") pod \"redhat-marketplace-94cd8\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:43 crc kubenswrapper[4925]: I0202 11:12:43.964907 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:12:44 crc kubenswrapper[4925]: I0202 11:12:44.366809 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94cd8"] Feb 02 11:12:44 crc kubenswrapper[4925]: W0202 11:12:44.371299 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39421b95_3cc2_4396_bb80_a12f4bc41792.slice/crio-97ff0344166b9b420e41ff5f922058f8960e874e1d82495a3b3d3c927df0a225 WatchSource:0}: Error finding container 97ff0344166b9b420e41ff5f922058f8960e874e1d82495a3b3d3c927df0a225: Status 404 returned error can't find the container with id 97ff0344166b9b420e41ff5f922058f8960e874e1d82495a3b3d3c927df0a225 Feb 02 11:12:44 crc kubenswrapper[4925]: I0202 11:12:44.456828 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94cd8" event={"ID":"39421b95-3cc2-4396-bb80-a12f4bc41792","Type":"ContainerStarted","Data":"97ff0344166b9b420e41ff5f922058f8960e874e1d82495a3b3d3c927df0a225"} Feb 02 11:12:45 crc kubenswrapper[4925]: I0202 11:12:45.462841 4925 generic.go:334] "Generic (PLEG): container finished" podID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerID="eddaa1ec739d09d78fa3226e7024bea619921a2fed0e9c5119b79ed57d73ce4f" exitCode=0 Feb 02 11:12:45 crc kubenswrapper[4925]: I0202 11:12:45.462882 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94cd8" event={"ID":"39421b95-3cc2-4396-bb80-a12f4bc41792","Type":"ContainerDied","Data":"eddaa1ec739d09d78fa3226e7024bea619921a2fed0e9c5119b79ed57d73ce4f"} Feb 02 11:12:46 crc kubenswrapper[4925]: E0202 11:12:46.937026 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39421b95_3cc2_4396_bb80_a12f4bc41792.slice/crio-conmon-27e3e62e6a04c1c6c968b132bca12fd8cc5f5fbdc2a08725809ff8583632791d.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.042537 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pgvfn"] Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.045299 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.053759 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgvfn"] Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.166973 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-catalog-content\") pod \"community-operators-pgvfn\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.167025 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6wzv\" (UniqueName: \"kubernetes.io/projected/46c8448f-d7a2-473d-9694-402273d86fc9-kube-api-access-k6wzv\") pod \"community-operators-pgvfn\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.167060 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-utilities\") pod \"community-operators-pgvfn\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " 
pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.225437 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86rdz"] Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.225717 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86rdz" podUID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerName="registry-server" containerID="cri-o://2cb7dedff5a9add0474c33655650b13e76efa5af901a55890099e51b27191171" gracePeriod=2 Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.267858 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-catalog-content\") pod \"community-operators-pgvfn\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.267907 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6wzv\" (UniqueName: \"kubernetes.io/projected/46c8448f-d7a2-473d-9694-402273d86fc9-kube-api-access-k6wzv\") pod \"community-operators-pgvfn\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.267929 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-utilities\") pod \"community-operators-pgvfn\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.268542 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-catalog-content\") pod \"community-operators-pgvfn\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.268643 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-utilities\") pod \"community-operators-pgvfn\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.288924 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6wzv\" (UniqueName: \"kubernetes.io/projected/46c8448f-d7a2-473d-9694-402273d86fc9-kube-api-access-k6wzv\") pod \"community-operators-pgvfn\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.365410 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.530309 4925 generic.go:334] "Generic (PLEG): container finished" podID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerID="27e3e62e6a04c1c6c968b132bca12fd8cc5f5fbdc2a08725809ff8583632791d" exitCode=0 Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.530381 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94cd8" event={"ID":"39421b95-3cc2-4396-bb80-a12f4bc41792","Type":"ContainerDied","Data":"27e3e62e6a04c1c6c968b132bca12fd8cc5f5fbdc2a08725809ff8583632791d"} Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.554526 4925 generic.go:334] "Generic (PLEG): container finished" podID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerID="2cb7dedff5a9add0474c33655650b13e76efa5af901a55890099e51b27191171" exitCode=0 Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.554569 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rdz" event={"ID":"f04645b5-9a7c-4bd5-b13d-fe2a3b842450","Type":"ContainerDied","Data":"2cb7dedff5a9add0474c33655650b13e76efa5af901a55890099e51b27191171"} Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.763166 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.876773 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-catalog-content\") pod \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.876824 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ts7l\" (UniqueName: \"kubernetes.io/projected/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-kube-api-access-5ts7l\") pod \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.876952 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-utilities\") pod \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\" (UID: \"f04645b5-9a7c-4bd5-b13d-fe2a3b842450\") " Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.877738 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-utilities" (OuterVolumeSpecName: "utilities") pod "f04645b5-9a7c-4bd5-b13d-fe2a3b842450" (UID: "f04645b5-9a7c-4bd5-b13d-fe2a3b842450"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.882153 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-kube-api-access-5ts7l" (OuterVolumeSpecName: "kube-api-access-5ts7l") pod "f04645b5-9a7c-4bd5-b13d-fe2a3b842450" (UID: "f04645b5-9a7c-4bd5-b13d-fe2a3b842450"). InnerVolumeSpecName "kube-api-access-5ts7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:12:47 crc kubenswrapper[4925]: W0202 11:12:47.946193 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46c8448f_d7a2_473d_9694_402273d86fc9.slice/crio-ded248fd3b34902afc1855b7cb4c3f7b70b4e6e5d46791754e70b0b3fc51719d WatchSource:0}: Error finding container ded248fd3b34902afc1855b7cb4c3f7b70b4e6e5d46791754e70b0b3fc51719d: Status 404 returned error can't find the container with id ded248fd3b34902afc1855b7cb4c3f7b70b4e6e5d46791754e70b0b3fc51719d Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.946512 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgvfn"] Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.978377 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ts7l\" (UniqueName: \"kubernetes.io/projected/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-kube-api-access-5ts7l\") on node \"crc\" DevicePath \"\"" Feb 02 11:12:47 crc kubenswrapper[4925]: I0202 11:12:47.978654 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:12:48 crc kubenswrapper[4925]: I0202 11:12:48.564503 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgvfn" event={"ID":"46c8448f-d7a2-473d-9694-402273d86fc9","Type":"ContainerStarted","Data":"ded248fd3b34902afc1855b7cb4c3f7b70b4e6e5d46791754e70b0b3fc51719d"} Feb 02 11:12:48 crc kubenswrapper[4925]: I0202 11:12:48.566706 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rdz" event={"ID":"f04645b5-9a7c-4bd5-b13d-fe2a3b842450","Type":"ContainerDied","Data":"47072402034cada9f449cfef6947c7ed585f088c583fba1a9be635c31f723467"} Feb 02 11:12:48 crc kubenswrapper[4925]: I0202 
11:12:48.566754 4925 scope.go:117] "RemoveContainer" containerID="2cb7dedff5a9add0474c33655650b13e76efa5af901a55890099e51b27191171" Feb 02 11:12:48 crc kubenswrapper[4925]: I0202 11:12:48.566858 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86rdz" Feb 02 11:12:48 crc kubenswrapper[4925]: I0202 11:12:48.581144 4925 scope.go:117] "RemoveContainer" containerID="c8b01f6dc2b70c71cd9b03101b89146732798bf13bb34adcc672c84b72e15d5e" Feb 02 11:12:48 crc kubenswrapper[4925]: I0202 11:12:48.597320 4925 scope.go:117] "RemoveContainer" containerID="efddb6a7dcbd995e53d2a9560b6c388dc06cf7e4a080a3f186eba48c05d82d87" Feb 02 11:12:49 crc kubenswrapper[4925]: I0202 11:12:49.052743 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f04645b5-9a7c-4bd5-b13d-fe2a3b842450" (UID: "f04645b5-9a7c-4bd5-b13d-fe2a3b842450"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:12:49 crc kubenswrapper[4925]: I0202 11:12:49.093695 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04645b5-9a7c-4bd5-b13d-fe2a3b842450-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:12:49 crc kubenswrapper[4925]: I0202 11:12:49.236298 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86rdz"] Feb 02 11:12:49 crc kubenswrapper[4925]: I0202 11:12:49.240571 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86rdz"] Feb 02 11:12:49 crc kubenswrapper[4925]: I0202 11:12:49.573261 4925 generic.go:334] "Generic (PLEG): container finished" podID="46c8448f-d7a2-473d-9694-402273d86fc9" containerID="df2a7b76927888fbac040ef82812950e5f4432bedb3f6e0740ebb5753d8fc6fd" exitCode=0 Feb 02 11:12:49 crc kubenswrapper[4925]: I0202 11:12:49.573330 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgvfn" event={"ID":"46c8448f-d7a2-473d-9694-402273d86fc9","Type":"ContainerDied","Data":"df2a7b76927888fbac040ef82812950e5f4432bedb3f6e0740ebb5753d8fc6fd"} Feb 02 11:12:50 crc kubenswrapper[4925]: I0202 11:12:50.670110 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" path="/var/lib/kubelet/pods/f04645b5-9a7c-4bd5-b13d-fe2a3b842450/volumes" Feb 02 11:12:54 crc kubenswrapper[4925]: I0202 11:12:54.604426 4925 generic.go:334] "Generic (PLEG): container finished" podID="46c8448f-d7a2-473d-9694-402273d86fc9" containerID="3beb8b23835fdc850ead2f4d106e042cc52a493089ed2e391a8a4f16b9b1ec13" exitCode=0 Feb 02 11:12:54 crc kubenswrapper[4925]: I0202 11:12:54.604533 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgvfn" 
event={"ID":"46c8448f-d7a2-473d-9694-402273d86fc9","Type":"ContainerDied","Data":"3beb8b23835fdc850ead2f4d106e042cc52a493089ed2e391a8a4f16b9b1ec13"} Feb 02 11:12:54 crc kubenswrapper[4925]: I0202 11:12:54.607219 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94cd8" event={"ID":"39421b95-3cc2-4396-bb80-a12f4bc41792","Type":"ContainerStarted","Data":"c215ae3cdb2d60bdf907d5616c33976fa15fdd4ef857b4e0f0cec2bcf1ac654a"} Feb 02 11:12:54 crc kubenswrapper[4925]: I0202 11:12:54.635799 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-94cd8" podStartSLOduration=5.164940151 podStartE2EDuration="11.635777815s" podCreationTimestamp="2026-02-02 11:12:43 +0000 UTC" firstStartedPulling="2026-02-02 11:12:45.464034056 +0000 UTC m=+942.468283018" lastFinishedPulling="2026-02-02 11:12:51.9348717 +0000 UTC m=+948.939120682" observedRunningTime="2026-02-02 11:12:54.634577474 +0000 UTC m=+951.638826436" watchObservedRunningTime="2026-02-02 11:12:54.635777815 +0000 UTC m=+951.640026777" Feb 02 11:12:56 crc kubenswrapper[4925]: I0202 11:12:56.624939 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgvfn" event={"ID":"46c8448f-d7a2-473d-9694-402273d86fc9","Type":"ContainerStarted","Data":"d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f"} Feb 02 11:12:56 crc kubenswrapper[4925]: I0202 11:12:56.656770 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pgvfn" podStartSLOduration=2.945931334 podStartE2EDuration="9.656748415s" podCreationTimestamp="2026-02-02 11:12:47 +0000 UTC" firstStartedPulling="2026-02-02 11:12:49.57458579 +0000 UTC m=+946.578834752" lastFinishedPulling="2026-02-02 11:12:56.285402881 +0000 UTC m=+953.289651833" observedRunningTime="2026-02-02 11:12:56.653062027 +0000 UTC m=+953.657310999" 
watchObservedRunningTime="2026-02-02 11:12:56.656748415 +0000 UTC m=+953.660997377" Feb 02 11:12:57 crc kubenswrapper[4925]: I0202 11:12:57.366019 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:57 crc kubenswrapper[4925]: I0202 11:12:57.366153 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:12:58 crc kubenswrapper[4925]: I0202 11:12:58.411258 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pgvfn" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="registry-server" probeResult="failure" output=< Feb 02 11:12:58 crc kubenswrapper[4925]: timeout: failed to connect service ":50051" within 1s Feb 02 11:12:58 crc kubenswrapper[4925]: > Feb 02 11:13:03 crc kubenswrapper[4925]: I0202 11:13:03.965871 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:13:03 crc kubenswrapper[4925]: I0202 11:13:03.966409 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:13:04 crc kubenswrapper[4925]: I0202 11:13:04.015096 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:13:04 crc kubenswrapper[4925]: I0202 11:13:04.717849 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:13:06 crc kubenswrapper[4925]: I0202 11:13:06.427142 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94cd8"] Feb 02 11:13:06 crc kubenswrapper[4925]: I0202 11:13:06.690500 4925 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-94cd8" podUID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerName="registry-server" containerID="cri-o://c215ae3cdb2d60bdf907d5616c33976fa15fdd4ef857b4e0f0cec2bcf1ac654a" gracePeriod=2 Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.446718 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.496697 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.718274 4925 generic.go:334] "Generic (PLEG): container finished" podID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerID="c215ae3cdb2d60bdf907d5616c33976fa15fdd4ef857b4e0f0cec2bcf1ac654a" exitCode=0 Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.718366 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94cd8" event={"ID":"39421b95-3cc2-4396-bb80-a12f4bc41792","Type":"ContainerDied","Data":"c215ae3cdb2d60bdf907d5616c33976fa15fdd4ef857b4e0f0cec2bcf1ac654a"} Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.718456 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94cd8" event={"ID":"39421b95-3cc2-4396-bb80-a12f4bc41792","Type":"ContainerDied","Data":"97ff0344166b9b420e41ff5f922058f8960e874e1d82495a3b3d3c927df0a225"} Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.718473 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97ff0344166b9b420e41ff5f922058f8960e874e1d82495a3b3d3c927df0a225" Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.721696 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.830704 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-catalog-content\") pod \"39421b95-3cc2-4396-bb80-a12f4bc41792\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.830807 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5jjd\" (UniqueName: \"kubernetes.io/projected/39421b95-3cc2-4396-bb80-a12f4bc41792-kube-api-access-k5jjd\") pod \"39421b95-3cc2-4396-bb80-a12f4bc41792\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.830835 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-utilities\") pod \"39421b95-3cc2-4396-bb80-a12f4bc41792\" (UID: \"39421b95-3cc2-4396-bb80-a12f4bc41792\") " Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.832229 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-utilities" (OuterVolumeSpecName: "utilities") pod "39421b95-3cc2-4396-bb80-a12f4bc41792" (UID: "39421b95-3cc2-4396-bb80-a12f4bc41792"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.854504 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39421b95-3cc2-4396-bb80-a12f4bc41792-kube-api-access-k5jjd" (OuterVolumeSpecName: "kube-api-access-k5jjd") pod "39421b95-3cc2-4396-bb80-a12f4bc41792" (UID: "39421b95-3cc2-4396-bb80-a12f4bc41792"). InnerVolumeSpecName "kube-api-access-k5jjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.863120 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39421b95-3cc2-4396-bb80-a12f4bc41792" (UID: "39421b95-3cc2-4396-bb80-a12f4bc41792"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.932474 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5jjd\" (UniqueName: \"kubernetes.io/projected/39421b95-3cc2-4396-bb80-a12f4bc41792-kube-api-access-k5jjd\") on node \"crc\" DevicePath \"\"" Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.932512 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:13:07 crc kubenswrapper[4925]: I0202 11:13:07.932523 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39421b95-3cc2-4396-bb80-a12f4bc41792-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:13:08 crc kubenswrapper[4925]: I0202 11:13:08.723624 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94cd8" Feb 02 11:13:08 crc kubenswrapper[4925]: I0202 11:13:08.752487 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94cd8"] Feb 02 11:13:08 crc kubenswrapper[4925]: I0202 11:13:08.756520 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-94cd8"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.303009 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq"] Feb 02 11:13:09 crc kubenswrapper[4925]: E0202 11:13:09.303339 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerName="extract-content" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.303360 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerName="extract-content" Feb 02 11:13:09 crc kubenswrapper[4925]: E0202 11:13:09.303382 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerName="extract-utilities" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.303390 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerName="extract-utilities" Feb 02 11:13:09 crc kubenswrapper[4925]: E0202 11:13:09.303407 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerName="extract-content" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.303417 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerName="extract-content" Feb 02 11:13:09 crc kubenswrapper[4925]: E0202 11:13:09.303426 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39421b95-3cc2-4396-bb80-a12f4bc41792" 
containerName="registry-server" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.303433 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerName="registry-server" Feb 02 11:13:09 crc kubenswrapper[4925]: E0202 11:13:09.303444 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerName="registry-server" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.303451 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerName="registry-server" Feb 02 11:13:09 crc kubenswrapper[4925]: E0202 11:13:09.303463 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerName="extract-utilities" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.303470 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerName="extract-utilities" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.303598 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="39421b95-3cc2-4396-bb80-a12f4bc41792" containerName="registry-server" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.303613 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04645b5-9a7c-4bd5-b13d-fe2a3b842450" containerName="registry-server" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.304155 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.309419 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dm2zk" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.316755 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.331014 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.331973 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.336827 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-d24ms" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.336835 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.338006 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.340847 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vq97x" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.350128 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.354547 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.384847 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.386320 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.391624 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zd6d2" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.401128 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.402287 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.404570 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-br8dm" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.408167 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.417674 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.418676 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.425000 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-z72rn" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.448910 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.456474 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5m6\" (UniqueName: \"kubernetes.io/projected/057f6b87-28a7-46c6-8d51-c32937d77a6a-kube-api-access-qc5m6\") pod \"cinder-operator-controller-manager-56b8d567c6-9sb76\" (UID: \"057f6b87-28a7-46c6-8d51-c32937d77a6a\") " pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.456849 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbcbt\" (UniqueName: 
\"kubernetes.io/projected/271532e8-0b2a-40bc-b982-56e6c0c706dc-kube-api-access-vbcbt\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-5qxgq\" (UID: \"271532e8-0b2a-40bc-b982-56e6c0c706dc\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.457104 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpw6f\" (UniqueName: \"kubernetes.io/projected/6f64f1b5-8b8f-48b6-934c-5d148565b151-kube-api-access-vpw6f\") pod \"designate-operator-controller-manager-6d9697b7f4-zvg88\" (UID: \"6f64f1b5-8b8f-48b6-934c-5d148565b151\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.458962 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.502376 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.503532 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.508617 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.512033 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-sql8n" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.528031 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.539424 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.540425 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.549533 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zp7tv" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.550142 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.551173 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.555545 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-n8cts" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.564239 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx87b\" (UniqueName: \"kubernetes.io/projected/714728e3-dda9-47d3-aca5-c9bf8a13c2eb-kube-api-access-mx87b\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fgf8c\" (UID: \"714728e3-dda9-47d3-aca5-c9bf8a13c2eb\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.564494 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5m6\" (UniqueName: \"kubernetes.io/projected/057f6b87-28a7-46c6-8d51-c32937d77a6a-kube-api-access-qc5m6\") pod \"cinder-operator-controller-manager-56b8d567c6-9sb76\" (UID: \"057f6b87-28a7-46c6-8d51-c32937d77a6a\") " pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.564593 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbcbt\" (UniqueName: \"kubernetes.io/projected/271532e8-0b2a-40bc-b982-56e6c0c706dc-kube-api-access-vbcbt\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-5qxgq\" (UID: \"271532e8-0b2a-40bc-b982-56e6c0c706dc\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.564687 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22pwf\" (UniqueName: 
\"kubernetes.io/projected/e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73-kube-api-access-22pwf\") pod \"glance-operator-controller-manager-8886f4c47-swkbc\" (UID: \"e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.564763 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjgnh\" (UniqueName: \"kubernetes.io/projected/8405a39c-7526-47b8-93b8-b9bb03cb970b-kube-api-access-pjgnh\") pod \"horizon-operator-controller-manager-5fb775575f-mfxvn\" (UID: \"8405a39c-7526-47b8-93b8-b9bb03cb970b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.564848 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndd72\" (UniqueName: \"kubernetes.io/projected/9b6aadaa-89ca-46f2-bf48-59726671b789-kube-api-access-ndd72\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.564946 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpw6f\" (UniqueName: \"kubernetes.io/projected/6f64f1b5-8b8f-48b6-934c-5d148565b151-kube-api-access-vpw6f\") pod \"designate-operator-controller-manager-6d9697b7f4-zvg88\" (UID: \"6f64f1b5-8b8f-48b6-934c-5d148565b151\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.565049 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert\") pod 
\"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.565180 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmnp\" (UniqueName: \"kubernetes.io/projected/a8a71810-ebcf-4908-8e41-73fdce287188-kube-api-access-krmnp\") pod \"keystone-operator-controller-manager-84f48565d4-kbc5t\" (UID: \"a8a71810-ebcf-4908-8e41-73fdce287188\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.565342 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlfp\" (UniqueName: \"kubernetes.io/projected/2670eaa9-d6c1-479d-98d1-9a86c0c09305-kube-api-access-2wlfp\") pod \"heat-operator-controller-manager-69d6db494d-wggcm\" (UID: \"2670eaa9-d6c1-479d-98d1-9a86c0c09305\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.593931 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.607191 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5m6\" (UniqueName: \"kubernetes.io/projected/057f6b87-28a7-46c6-8d51-c32937d77a6a-kube-api-access-qc5m6\") pod \"cinder-operator-controller-manager-56b8d567c6-9sb76\" (UID: \"057f6b87-28a7-46c6-8d51-c32937d77a6a\") " pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.610981 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t"] Feb 02 
11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.626628 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpw6f\" (UniqueName: \"kubernetes.io/projected/6f64f1b5-8b8f-48b6-934c-5d148565b151-kube-api-access-vpw6f\") pod \"designate-operator-controller-manager-6d9697b7f4-zvg88\" (UID: \"6f64f1b5-8b8f-48b6-934c-5d148565b151\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.627587 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbcbt\" (UniqueName: \"kubernetes.io/projected/271532e8-0b2a-40bc-b982-56e6c0c706dc-kube-api-access-vbcbt\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-5qxgq\" (UID: \"271532e8-0b2a-40bc-b982-56e6c0c706dc\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.633397 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.653325 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.654417 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.655587 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.662948 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-h85vr" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.694470 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.732364 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c28q8\" (UniqueName: \"kubernetes.io/projected/6db50ed1-76a9-48ad-b08e-07edd9d07421-kube-api-access-c28q8\") pod \"manila-operator-controller-manager-7dd968899f-f9rbf\" (UID: \"6db50ed1-76a9-48ad-b08e-07edd9d07421\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.733464 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjgnh\" (UniqueName: \"kubernetes.io/projected/8405a39c-7526-47b8-93b8-b9bb03cb970b-kube-api-access-pjgnh\") pod \"horizon-operator-controller-manager-5fb775575f-mfxvn\" (UID: \"8405a39c-7526-47b8-93b8-b9bb03cb970b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.733512 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.736911 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndd72\" (UniqueName: \"kubernetes.io/projected/9b6aadaa-89ca-46f2-bf48-59726671b789-kube-api-access-ndd72\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: 
\"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.737028 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.737068 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krmnp\" (UniqueName: \"kubernetes.io/projected/a8a71810-ebcf-4908-8e41-73fdce287188-kube-api-access-krmnp\") pod \"keystone-operator-controller-manager-84f48565d4-kbc5t\" (UID: \"a8a71810-ebcf-4908-8e41-73fdce287188\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.737183 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wlfp\" (UniqueName: \"kubernetes.io/projected/2670eaa9-d6c1-479d-98d1-9a86c0c09305-kube-api-access-2wlfp\") pod \"heat-operator-controller-manager-69d6db494d-wggcm\" (UID: \"2670eaa9-d6c1-479d-98d1-9a86c0c09305\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.737307 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx87b\" (UniqueName: \"kubernetes.io/projected/714728e3-dda9-47d3-aca5-c9bf8a13c2eb-kube-api-access-mx87b\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fgf8c\" (UID: \"714728e3-dda9-47d3-aca5-c9bf8a13c2eb\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.737454 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22pwf\" (UniqueName: \"kubernetes.io/projected/e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73-kube-api-access-22pwf\") pod \"glance-operator-controller-manager-8886f4c47-swkbc\" (UID: \"e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" Feb 02 11:13:09 crc kubenswrapper[4925]: E0202 11:13:09.738060 4925 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:09 crc kubenswrapper[4925]: E0202 11:13:09.738184 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert podName:9b6aadaa-89ca-46f2-bf48-59726671b789 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:10.238161821 +0000 UTC m=+967.242410783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert") pod "infra-operator-controller-manager-79955696d6-m9rb5" (UID: "9b6aadaa-89ca-46f2-bf48-59726671b789") : secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.759458 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krmnp\" (UniqueName: \"kubernetes.io/projected/a8a71810-ebcf-4908-8e41-73fdce287188-kube-api-access-krmnp\") pod \"keystone-operator-controller-manager-84f48565d4-kbc5t\" (UID: \"a8a71810-ebcf-4908-8e41-73fdce287188\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.764330 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx87b\" (UniqueName: \"kubernetes.io/projected/714728e3-dda9-47d3-aca5-c9bf8a13c2eb-kube-api-access-mx87b\") pod 
\"ironic-operator-controller-manager-5f4b8bd54d-fgf8c\" (UID: \"714728e3-dda9-47d3-aca5-c9bf8a13c2eb\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.765526 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wlfp\" (UniqueName: \"kubernetes.io/projected/2670eaa9-d6c1-479d-98d1-9a86c0c09305-kube-api-access-2wlfp\") pod \"heat-operator-controller-manager-69d6db494d-wggcm\" (UID: \"2670eaa9-d6c1-479d-98d1-9a86c0c09305\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.767206 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndd72\" (UniqueName: \"kubernetes.io/projected/9b6aadaa-89ca-46f2-bf48-59726671b789-kube-api-access-ndd72\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.768170 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22pwf\" (UniqueName: \"kubernetes.io/projected/e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73-kube-api-access-22pwf\") pod \"glance-operator-controller-manager-8886f4c47-swkbc\" (UID: \"e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.775513 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjgnh\" (UniqueName: \"kubernetes.io/projected/8405a39c-7526-47b8-93b8-b9bb03cb970b-kube-api-access-pjgnh\") pod \"horizon-operator-controller-manager-5fb775575f-mfxvn\" (UID: \"8405a39c-7526-47b8-93b8-b9bb03cb970b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" Feb 02 11:13:09 
crc kubenswrapper[4925]: I0202 11:13:09.807178 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.808055 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.811502 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8p8kv" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.822164 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.824231 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.830046 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rxb8p" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.836566 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.837490 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.838517 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c28q8\" (UniqueName: \"kubernetes.io/projected/6db50ed1-76a9-48ad-b08e-07edd9d07421-kube-api-access-c28q8\") pod \"manila-operator-controller-manager-7dd968899f-f9rbf\" (UID: \"6db50ed1-76a9-48ad-b08e-07edd9d07421\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.838982 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-v2n6t" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.847982 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.859389 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c28q8\" (UniqueName: \"kubernetes.io/projected/6db50ed1-76a9-48ad-b08e-07edd9d07421-kube-api-access-c28q8\") pod \"manila-operator-controller-manager-7dd968899f-f9rbf\" (UID: \"6db50ed1-76a9-48ad-b08e-07edd9d07421\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.864802 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.865838 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.869967 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-kv6hf" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.871247 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.874503 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.876964 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.878981 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.884042 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-q4pd6" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.884207 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.886997 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.896939 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.911191 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.923398 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.934243 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kpk7c" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.939908 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whwnc\" (UniqueName: \"kubernetes.io/projected/2d3514fc-34cd-4021-a4d9-662abe6bb56e-kube-api-access-whwnc\") pod \"mariadb-operator-controller-manager-67bf948998-v4m7x\" (UID: \"2d3514fc-34cd-4021-a4d9-662abe6bb56e\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.940037 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79qdh\" (UniqueName: \"kubernetes.io/projected/7b8e50f8-9611-4be4-aa4e-a0834ec27a24-kube-api-access-79qdh\") pod \"neutron-operator-controller-manager-585dbc889-8nf8m\" (UID: \"7b8e50f8-9611-4be4-aa4e-a0834ec27a24\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.940071 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rdp\" (UniqueName: \"kubernetes.io/projected/85d89138-ff2c-4e69-bd55-bf6b2648d286-kube-api-access-72rdp\") pod \"octavia-operator-controller-manager-6687f8d877-zksqs\" (UID: \"85d89138-ff2c-4e69-bd55-bf6b2648d286\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.940131 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnjh\" (UniqueName: \"kubernetes.io/projected/252fe85c-1645-4a4b-bd66-efe5814e9b09-kube-api-access-lxnjh\") pod \"nova-operator-controller-manager-55bff696bd-bfkmp\" (UID: \"252fe85c-1645-4a4b-bd66-efe5814e9b09\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.941413 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t"] Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.943846 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.947526 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-c6sxq" Feb 02 11:13:09 crc kubenswrapper[4925]: I0202 11:13:09.994202 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.011184 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.013694 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.035193 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.051962 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.054294 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whwnc\" (UniqueName: \"kubernetes.io/projected/2d3514fc-34cd-4021-a4d9-662abe6bb56e-kube-api-access-whwnc\") pod \"mariadb-operator-controller-manager-67bf948998-v4m7x\" (UID: \"2d3514fc-34cd-4021-a4d9-662abe6bb56e\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.054363 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg9vq\" (UniqueName: \"kubernetes.io/projected/a4e64115-b62c-421f-8072-88fc52eef59e-kube-api-access-jg9vq\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.054399 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7bsn\" (UniqueName: \"kubernetes.io/projected/88bf0458-e0ab-4b1b-ad4d-01e0f51780e8-kube-api-access-q7bsn\") pod \"ovn-operator-controller-manager-788c46999f-zrg4p\" (UID: \"88bf0458-e0ab-4b1b-ad4d-01e0f51780e8\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 
11:13:10.054446 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.054473 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm99s\" (UniqueName: \"kubernetes.io/projected/e11ef3f5-cbad-483b-a5a6-dedfb5ec556f-kube-api-access-dm99s\") pod \"placement-operator-controller-manager-5b964cf4cd-5rz7t\" (UID: \"e11ef3f5-cbad-483b-a5a6-dedfb5ec556f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.054509 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79qdh\" (UniqueName: \"kubernetes.io/projected/7b8e50f8-9611-4be4-aa4e-a0834ec27a24-kube-api-access-79qdh\") pod \"neutron-operator-controller-manager-585dbc889-8nf8m\" (UID: \"7b8e50f8-9611-4be4-aa4e-a0834ec27a24\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.054535 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rdp\" (UniqueName: \"kubernetes.io/projected/85d89138-ff2c-4e69-bd55-bf6b2648d286-kube-api-access-72rdp\") pod \"octavia-operator-controller-manager-6687f8d877-zksqs\" (UID: \"85d89138-ff2c-4e69-bd55-bf6b2648d286\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.054559 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxnjh\" 
(UniqueName: \"kubernetes.io/projected/252fe85c-1645-4a4b-bd66-efe5814e9b09-kube-api-access-lxnjh\") pod \"nova-operator-controller-manager-55bff696bd-bfkmp\" (UID: \"252fe85c-1645-4a4b-bd66-efe5814e9b09\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.059066 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.068608 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.082163 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.086438 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79qdh\" (UniqueName: \"kubernetes.io/projected/7b8e50f8-9611-4be4-aa4e-a0834ec27a24-kube-api-access-79qdh\") pod \"neutron-operator-controller-manager-585dbc889-8nf8m\" (UID: \"7b8e50f8-9611-4be4-aa4e-a0834ec27a24\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.092988 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whwnc\" (UniqueName: \"kubernetes.io/projected/2d3514fc-34cd-4021-a4d9-662abe6bb56e-kube-api-access-whwnc\") pod \"mariadb-operator-controller-manager-67bf948998-v4m7x\" (UID: \"2d3514fc-34cd-4021-a4d9-662abe6bb56e\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.109151 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxnjh\" 
(UniqueName: \"kubernetes.io/projected/252fe85c-1645-4a4b-bd66-efe5814e9b09-kube-api-access-lxnjh\") pod \"nova-operator-controller-manager-55bff696bd-bfkmp\" (UID: \"252fe85c-1645-4a4b-bd66-efe5814e9b09\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.117958 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.118833 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.122131 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rdp\" (UniqueName: \"kubernetes.io/projected/85d89138-ff2c-4e69-bd55-bf6b2648d286-kube-api-access-72rdp\") pod \"octavia-operator-controller-manager-6687f8d877-zksqs\" (UID: \"85d89138-ff2c-4e69-bd55-bf6b2648d286\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.126617 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bz247" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.151246 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.156300 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg9vq\" (UniqueName: \"kubernetes.io/projected/a4e64115-b62c-421f-8072-88fc52eef59e-kube-api-access-jg9vq\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.156343 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7bsn\" (UniqueName: \"kubernetes.io/projected/88bf0458-e0ab-4b1b-ad4d-01e0f51780e8-kube-api-access-q7bsn\") pod \"ovn-operator-controller-manager-788c46999f-zrg4p\" (UID: \"88bf0458-e0ab-4b1b-ad4d-01e0f51780e8\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.156393 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.156420 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm99s\" (UniqueName: \"kubernetes.io/projected/e11ef3f5-cbad-483b-a5a6-dedfb5ec556f-kube-api-access-dm99s\") pod \"placement-operator-controller-manager-5b964cf4cd-5rz7t\" (UID: \"e11ef3f5-cbad-483b-a5a6-dedfb5ec556f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.156941 4925 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.156993 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert podName:a4e64115-b62c-421f-8072-88fc52eef59e nodeName:}" failed. 
No retries permitted until 2026-02-02 11:13:10.656973938 +0000 UTC m=+967.661222900 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" (UID: "a4e64115-b62c-421f-8072-88fc52eef59e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.157275 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.192644 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm99s\" (UniqueName: \"kubernetes.io/projected/e11ef3f5-cbad-483b-a5a6-dedfb5ec556f-kube-api-access-dm99s\") pod \"placement-operator-controller-manager-5b964cf4cd-5rz7t\" (UID: \"e11ef3f5-cbad-483b-a5a6-dedfb5ec556f\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.193274 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7bsn\" (UniqueName: \"kubernetes.io/projected/88bf0458-e0ab-4b1b-ad4d-01e0f51780e8-kube-api-access-q7bsn\") pod \"ovn-operator-controller-manager-788c46999f-zrg4p\" (UID: \"88bf0458-e0ab-4b1b-ad4d-01e0f51780e8\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.206137 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg9vq\" (UniqueName: \"kubernetes.io/projected/a4e64115-b62c-421f-8072-88fc52eef59e-kube-api-access-jg9vq\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.206206 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.207599 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.211308 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-njgl7" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.218452 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.230824 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.231599 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.234769 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-h6pq9" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.235122 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.239257 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" Feb 02 11:13:10 crc kubenswrapper[4925]: W0202 11:13:10.242579 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod271532e8_0b2a_40bc_b982_56e6c0c706dc.slice/crio-ff46726e7b9003e8ec39b63432a9164fad20b937ef54cd19e21227a7290a0f76 WatchSource:0}: Error finding container ff46726e7b9003e8ec39b63432a9164fad20b937ef54cd19e21227a7290a0f76: Status 404 returned error can't find the container with id ff46726e7b9003e8ec39b63432a9164fad20b937ef54cd19e21227a7290a0f76 Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.262363 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx2gd\" (UniqueName: \"kubernetes.io/projected/fc69d485-23dc-4c0c-88ef-9fc6729d977d-kube-api-access-vx2gd\") pod \"swift-operator-controller-manager-68fc8c869-4lhnh\" (UID: \"fc69d485-23dc-4c0c-88ef-9fc6729d977d\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.262439 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.262463 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5f7n\" (UniqueName: \"kubernetes.io/projected/ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0-kube-api-access-r5f7n\") pod \"telemetry-operator-controller-manager-64b5b76f97-k579v\" (UID: \"ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0\") " 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.262628 4925 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.262678 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert podName:9b6aadaa-89ca-46f2-bf48-59726671b789 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:11.262662571 +0000 UTC m=+968.266911533 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert") pod "infra-operator-controller-manager-79955696d6-m9rb5" (UID: "9b6aadaa-89ca-46f2-bf48-59726671b789") : secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.289777 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.290245 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.319340 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gbm72"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.321048 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.323252 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ww589" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.338888 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gbm72"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.361216 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.363903 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx2gd\" (UniqueName: \"kubernetes.io/projected/fc69d485-23dc-4c0c-88ef-9fc6729d977d-kube-api-access-vx2gd\") pod \"swift-operator-controller-manager-68fc8c869-4lhnh\" (UID: \"fc69d485-23dc-4c0c-88ef-9fc6729d977d\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.363954 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fwv\" (UniqueName: \"kubernetes.io/projected/07bdcdf5-a330-4524-9695-d089c2fbd4ae-kube-api-access-98fwv\") pod \"test-operator-controller-manager-56f8bfcd9f-8mpnq\" (UID: \"07bdcdf5-a330-4524-9695-d089c2fbd4ae\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.364040 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5f7n\" (UniqueName: \"kubernetes.io/projected/ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0-kube-api-access-r5f7n\") pod \"telemetry-operator-controller-manager-64b5b76f97-k579v\" (UID: 
\"ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.454791 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.463227 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5f7n\" (UniqueName: \"kubernetes.io/projected/ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0-kube-api-access-r5f7n\") pod \"telemetry-operator-controller-manager-64b5b76f97-k579v\" (UID: \"ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.486121 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx2gd\" (UniqueName: \"kubernetes.io/projected/fc69d485-23dc-4c0c-88ef-9fc6729d977d-kube-api-access-vx2gd\") pod \"swift-operator-controller-manager-68fc8c869-4lhnh\" (UID: \"fc69d485-23dc-4c0c-88ef-9fc6729d977d\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.511649 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98fwv\" (UniqueName: \"kubernetes.io/projected/07bdcdf5-a330-4524-9695-d089c2fbd4ae-kube-api-access-98fwv\") pod \"test-operator-controller-manager-56f8bfcd9f-8mpnq\" (UID: \"07bdcdf5-a330-4524-9695-d089c2fbd4ae\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.511917 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fctxv\" (UniqueName: \"kubernetes.io/projected/2ce3d469-8592-45c6-aba0-f1a607694c6d-kube-api-access-fctxv\") pod 
\"watcher-operator-controller-manager-564965969-gbm72\" (UID: \"2ce3d469-8592-45c6-aba0-f1a607694c6d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.535866 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.590722 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.622020 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.628850 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fctxv\" (UniqueName: \"kubernetes.io/projected/2ce3d469-8592-45c6-aba0-f1a607694c6d-kube-api-access-fctxv\") pod \"watcher-operator-controller-manager-564965969-gbm72\" (UID: \"2ce3d469-8592-45c6-aba0-f1a607694c6d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.638452 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.646951 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98fwv\" (UniqueName: \"kubernetes.io/projected/07bdcdf5-a330-4524-9695-d089c2fbd4ae-kube-api-access-98fwv\") pod \"test-operator-controller-manager-56f8bfcd9f-8mpnq\" (UID: \"07bdcdf5-a330-4524-9695-d089c2fbd4ae\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.661569 4925 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.664611 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.665809 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xh4df" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.675686 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fctxv\" (UniqueName: \"kubernetes.io/projected/2ce3d469-8592-45c6-aba0-f1a607694c6d-kube-api-access-fctxv\") pod \"watcher-operator-controller-manager-564965969-gbm72\" (UID: \"2ce3d469-8592-45c6-aba0-f1a607694c6d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.714924 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39421b95-3cc2-4396-bb80-a12f4bc41792" path="/var/lib/kubelet/pods/39421b95-3cc2-4396-bb80-a12f4bc41792/volumes" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.715765 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.728322 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.729292 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.731346 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.731751 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-v9zxc" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.750686 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.750788 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.752432 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl9mh\" (UniqueName: \"kubernetes.io/projected/21d85aaf-29ca-4cc9-8831-bb5691bc29d9-kube-api-access-hl9mh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vw6m6\" (UID: \"21d85aaf-29ca-4cc9-8831-bb5691bc29d9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.752527 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.752622 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87h6d\" (UniqueName: \"kubernetes.io/projected/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-kube-api-access-87h6d\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.752638 4925 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.752692 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert podName:a4e64115-b62c-421f-8072-88fc52eef59e nodeName:}" failed. No retries permitted until 2026-02-02 11:13:11.752675672 +0000 UTC m=+968.756924634 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" (UID: "a4e64115-b62c-421f-8072-88fc52eef59e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.761649 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.764172 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" event={"ID":"6f64f1b5-8b8f-48b6-934c-5d148565b151","Type":"ContainerStarted","Data":"e2e1d460100365c0c6479d914883529e143b0a823a8b49ea12240af7adf28a8c"} Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.767532 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" event={"ID":"271532e8-0b2a-40bc-b982-56e6c0c706dc","Type":"ContainerStarted","Data":"ff46726e7b9003e8ec39b63432a9164fad20b937ef54cd19e21227a7290a0f76"} Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.769287 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" event={"ID":"057f6b87-28a7-46c6-8d51-c32937d77a6a","Type":"ContainerStarted","Data":"311d461db7cddfcd5f5d3c6d9cdf1b99309660054b8c1d3dbb62971d387195b8"} Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.771808 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.778678 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.854479 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.854538 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.854640 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl9mh\" (UniqueName: \"kubernetes.io/projected/21d85aaf-29ca-4cc9-8831-bb5691bc29d9-kube-api-access-hl9mh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vw6m6\" (UID: \"21d85aaf-29ca-4cc9-8831-bb5691bc29d9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.854694 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87h6d\" (UniqueName: \"kubernetes.io/projected/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-kube-api-access-87h6d\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: 
\"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.855685 4925 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.855751 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:11.355720905 +0000 UTC m=+968.359969867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "webhook-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.855927 4925 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: E0202 11:13:10.855956 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:11.355947301 +0000 UTC m=+968.360196263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "metrics-server-cert" not found Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.869379 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.888798 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl9mh\" (UniqueName: \"kubernetes.io/projected/21d85aaf-29ca-4cc9-8831-bb5691bc29d9-kube-api-access-hl9mh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vw6m6\" (UID: \"21d85aaf-29ca-4cc9-8831-bb5691bc29d9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.892732 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87h6d\" (UniqueName: \"kubernetes.io/projected/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-kube-api-access-87h6d\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.914702 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t"] Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.954293 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" Feb 02 11:13:10 crc kubenswrapper[4925]: I0202 11:13:10.982609 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.022965 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.155122 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.175127 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.190990 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.196634 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc"] Feb 02 11:13:11 crc kubenswrapper[4925]: W0202 11:13:11.216973 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6ccf8c1_dcaf_49c7_84d9_dada6d7fec73.slice/crio-80a204edbc5752305b6b083953efae41a504c6a57f042ed2361d866ef4df2bcd WatchSource:0}: Error finding container 80a204edbc5752305b6b083953efae41a504c6a57f042ed2361d866ef4df2bcd: Status 404 returned error can't find the container with id 80a204edbc5752305b6b083953efae41a504c6a57f042ed2361d866ef4df2bcd Feb 02 11:13:11 crc kubenswrapper[4925]: W0202 11:13:11.217954 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8405a39c_7526_47b8_93b8_b9bb03cb970b.slice/crio-fc1defbd292fcfe2b8ce6d95d5dc0159fae73383b609370bd0d3d5c6c90b6219 WatchSource:0}: Error finding container 
fc1defbd292fcfe2b8ce6d95d5dc0159fae73383b609370bd0d3d5c6c90b6219: Status 404 returned error can't find the container with id fc1defbd292fcfe2b8ce6d95d5dc0159fae73383b609370bd0d3d5c6c90b6219 Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.263009 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.263055 4925 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.263174 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert podName:9b6aadaa-89ca-46f2-bf48-59726671b789 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:13.263159018 +0000 UTC m=+970.267407980 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert") pod "infra-operator-controller-manager-79955696d6-m9rb5" (UID: "9b6aadaa-89ca-46f2-bf48-59726671b789") : secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.320515 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs"] Feb 02 11:13:11 crc kubenswrapper[4925]: W0202 11:13:11.326114 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85d89138_ff2c_4e69_bd55_bf6b2648d286.slice/crio-9b9c7efaac78fa3f3c5d0672cd9792c6a62e7c24815e3a17b92e3ad7f4843fd6 WatchSource:0}: Error finding container 9b9c7efaac78fa3f3c5d0672cd9792c6a62e7c24815e3a17b92e3ad7f4843fd6: Status 404 returned error can't find the container with id 9b9c7efaac78fa3f3c5d0672cd9792c6a62e7c24815e3a17b92e3ad7f4843fd6 Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.364986 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.365046 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.365250 4925 secret.go:188] 
Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.365317 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:12.365299477 +0000 UTC m=+969.369548439 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "metrics-server-cert" not found Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.365344 4925 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.365434 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:12.36540508 +0000 UTC m=+969.369654042 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "webhook-server-cert" not found Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.428956 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.442199 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p"] Feb 02 11:13:11 crc kubenswrapper[4925]: W0202 11:13:11.450792 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod252fe85c_1645_4a4b_bd66_efe5814e9b09.slice/crio-b13df1901d2453f1262fce9754d79c70e3762fb0381135c976ae40a645320be0 WatchSource:0}: Error finding container b13df1901d2453f1262fce9754d79c70e3762fb0381135c976ae40a645320be0: Status 404 returned error can't find the container with id b13df1901d2453f1262fce9754d79c70e3762fb0381135c976ae40a645320be0 Feb 02 11:13:11 crc kubenswrapper[4925]: W0202 11:13:11.452743 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88bf0458_e0ab_4b1b_ad4d_01e0f51780e8.slice/crio-f4b646c09af9c0a8fed7986a1840225985b08b00dd0d22074be6cf4cef5f5972 WatchSource:0}: Error finding container f4b646c09af9c0a8fed7986a1840225985b08b00dd0d22074be6cf4cef5f5972: Status 404 returned error can't find the container with id f4b646c09af9c0a8fed7986a1840225985b08b00dd0d22074be6cf4cef5f5972 Feb 02 11:13:11 crc kubenswrapper[4925]: W0202 11:13:11.656854 4925 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae37dc52_0e8c_41b3_9c07_7ce321c5e2a0.slice/crio-ee403a5da2829dbf2718c7361fc28fe181c6fe64f28d78a64dc78903be3872f1 WatchSource:0}: Error finding container ee403a5da2829dbf2718c7361fc28fe181c6fe64f28d78a64dc78903be3872f1: Status 404 returned error can't find the container with id ee403a5da2829dbf2718c7361fc28fe181c6fe64f28d78a64dc78903be3872f1 Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.672596 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.682623 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.689885 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gbm72"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.703763 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.717545 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh"] Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.721624 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dm99s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-5rz7t_openstack-operators(e11ef3f5-cbad-483b-a5a6-dedfb5ec556f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.722232 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vx2gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-4lhnh_openstack-operators(fc69d485-23dc-4c0c-88ef-9fc6729d977d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.722820 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" podUID="e11ef3f5-cbad-483b-a5a6-dedfb5ec556f" Feb 02 11:13:11 crc 
kubenswrapper[4925]: E0202 11:13:11.724362 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" podUID="fc69d485-23dc-4c0c-88ef-9fc6729d977d" Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.726591 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-98fwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-8mpnq_openstack-operators(07bdcdf5-a330-4524-9695-d089c2fbd4ae): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.728630 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq"] Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.728764 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" podUID="07bdcdf5-a330-4524-9695-d089c2fbd4ae" Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.733524 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.776554 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.776722 4925 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.776821 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert podName:a4e64115-b62c-421f-8072-88fc52eef59e nodeName:}" failed. No retries permitted until 2026-02-02 11:13:13.776766828 +0000 UTC m=+970.781015790 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" (UID: "a4e64115-b62c-421f-8072-88fc52eef59e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.788107 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" event={"ID":"2670eaa9-d6c1-479d-98d1-9a86c0c09305","Type":"ContainerStarted","Data":"f95090b4152e527160d1200bb98d6eeb6a2fce1ca40b2309aad960d9186124c0"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.788175 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6"] Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.793426 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" 
event={"ID":"8405a39c-7526-47b8-93b8-b9bb03cb970b","Type":"ContainerStarted","Data":"fc1defbd292fcfe2b8ce6d95d5dc0159fae73383b609370bd0d3d5c6c90b6219"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.796347 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" event={"ID":"e11ef3f5-cbad-483b-a5a6-dedfb5ec556f","Type":"ContainerStarted","Data":"ee56151714979ad2df9c4f2f259bea34d7c5acc5289a3406a921209dcc7e4160"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.804683 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" event={"ID":"252fe85c-1645-4a4b-bd66-efe5814e9b09","Type":"ContainerStarted","Data":"b13df1901d2453f1262fce9754d79c70e3762fb0381135c976ae40a645320be0"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.814577 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" event={"ID":"2ce3d469-8592-45c6-aba0-f1a607694c6d","Type":"ContainerStarted","Data":"e02c87bc074524cbe9c7b042b5a42d34347e762e981c8a4f9034e8d6b77b19c4"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.818012 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" event={"ID":"85d89138-ff2c-4e69-bd55-bf6b2648d286","Type":"ContainerStarted","Data":"9b9c7efaac78fa3f3c5d0672cd9792c6a62e7c24815e3a17b92e3ad7f4843fd6"} Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.819130 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" 
podUID="e11ef3f5-cbad-483b-a5a6-dedfb5ec556f" Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.826967 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" event={"ID":"6db50ed1-76a9-48ad-b08e-07edd9d07421","Type":"ContainerStarted","Data":"bd6f3de6e7e8717646cd6ce33a620240ab3838707dfed7764a09da1bd02e8362"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.829030 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" event={"ID":"a8a71810-ebcf-4908-8e41-73fdce287188","Type":"ContainerStarted","Data":"3f72eb161e90b3369a2e18a45f6b54a902f60d46fd703d9bd1fea41861c74572"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.831974 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" event={"ID":"714728e3-dda9-47d3-aca5-c9bf8a13c2eb","Type":"ContainerStarted","Data":"00c8a5a6ac5b0bf7e9450dd9e3943151fec71fb9bdaf606514ff4680ec52f970"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.833888 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" event={"ID":"7b8e50f8-9611-4be4-aa4e-a0834ec27a24","Type":"ContainerStarted","Data":"18e572a084d4968f1baa21b8ce0d3d905ab2710c42e5b8fc18e59b4d4874cd57"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.835658 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" event={"ID":"ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0","Type":"ContainerStarted","Data":"ee403a5da2829dbf2718c7361fc28fe181c6fe64f28d78a64dc78903be3872f1"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.839227 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" 
event={"ID":"2d3514fc-34cd-4021-a4d9-662abe6bb56e","Type":"ContainerStarted","Data":"3fbd1565cc8953794f46729f7e7591c70ae7d933718ce39d2a6fa3150df06400"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.841025 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" event={"ID":"88bf0458-e0ab-4b1b-ad4d-01e0f51780e8","Type":"ContainerStarted","Data":"f4b646c09af9c0a8fed7986a1840225985b08b00dd0d22074be6cf4cef5f5972"} Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.845743 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" event={"ID":"07bdcdf5-a330-4524-9695-d089c2fbd4ae","Type":"ContainerStarted","Data":"92ffabdac79b5d54c09f606eb0f1b512fa04102fcb3bf8f892df1c89d7936b30"} Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.847900 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" podUID="07bdcdf5-a330-4524-9695-d089c2fbd4ae" Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.857372 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" event={"ID":"fc69d485-23dc-4c0c-88ef-9fc6729d977d","Type":"ContainerStarted","Data":"0b50cdddcf64bf39770bb5be2cb3cfe85bea01f0f473817c478cf00a8f0fd4ee"} Feb 02 11:13:11 crc kubenswrapper[4925]: E0202 11:13:11.859267 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" podUID="fc69d485-23dc-4c0c-88ef-9fc6729d977d" Feb 02 11:13:11 crc kubenswrapper[4925]: I0202 11:13:11.859626 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" event={"ID":"e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73","Type":"ContainerStarted","Data":"80a204edbc5752305b6b083953efae41a504c6a57f042ed2361d866ef4df2bcd"} Feb 02 11:13:12 crc kubenswrapper[4925]: I0202 11:13:12.404218 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:12 crc kubenswrapper[4925]: I0202 11:13:12.404295 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:12 crc kubenswrapper[4925]: E0202 11:13:12.404517 4925 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 11:13:12 crc kubenswrapper[4925]: E0202 11:13:12.404599 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. 
No retries permitted until 2026-02-02 11:13:14.404579138 +0000 UTC m=+971.408828100 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "metrics-server-cert" not found Feb 02 11:13:12 crc kubenswrapper[4925]: E0202 11:13:12.405113 4925 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 11:13:12 crc kubenswrapper[4925]: E0202 11:13:12.405157 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:14.405145643 +0000 UTC m=+971.409394605 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "webhook-server-cert" not found Feb 02 11:13:12 crc kubenswrapper[4925]: I0202 11:13:12.826825 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgvfn"] Feb 02 11:13:12 crc kubenswrapper[4925]: I0202 11:13:12.827208 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pgvfn" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="registry-server" containerID="cri-o://d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" gracePeriod=2 Feb 02 11:13:12 crc kubenswrapper[4925]: I0202 11:13:12.906165 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" 
event={"ID":"21d85aaf-29ca-4cc9-8831-bb5691bc29d9","Type":"ContainerStarted","Data":"5c688263a4e45ad295551cd8778faa75c251a402378a8f4d0138c9ee228c7b4c"} Feb 02 11:13:12 crc kubenswrapper[4925]: E0202 11:13:12.907774 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" podUID="e11ef3f5-cbad-483b-a5a6-dedfb5ec556f" Feb 02 11:13:12 crc kubenswrapper[4925]: E0202 11:13:12.908491 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" podUID="07bdcdf5-a330-4524-9695-d089c2fbd4ae" Feb 02 11:13:12 crc kubenswrapper[4925]: E0202 11:13:12.912190 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" podUID="fc69d485-23dc-4c0c-88ef-9fc6729d977d" Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.318616 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 
11:13:13 crc kubenswrapper[4925]: E0202 11:13:13.318813 4925 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:13 crc kubenswrapper[4925]: E0202 11:13:13.319064 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert podName:9b6aadaa-89ca-46f2-bf48-59726671b789 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:17.319044587 +0000 UTC m=+974.323293549 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert") pod "infra-operator-controller-manager-79955696d6-m9rb5" (UID: "9b6aadaa-89ca-46f2-bf48-59726671b789") : secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.398306 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.398367 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.398417 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.399280 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4b03a1975ff91abe6f92e545f0ab1b94a8a292e0264c3f7e53cacd130fa2f25b"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.399345 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://4b03a1975ff91abe6f92e545f0ab1b94a8a292e0264c3f7e53cacd130fa2f25b" gracePeriod=600 Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.828897 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:13 crc kubenswrapper[4925]: E0202 11:13:13.829103 4925 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:13 crc kubenswrapper[4925]: E0202 11:13:13.829170 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert podName:a4e64115-b62c-421f-8072-88fc52eef59e nodeName:}" failed. No retries permitted until 2026-02-02 11:13:17.829152424 +0000 UTC m=+974.833401376 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" (UID: "a4e64115-b62c-421f-8072-88fc52eef59e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.923117 4925 generic.go:334] "Generic (PLEG): container finished" podID="46c8448f-d7a2-473d-9694-402273d86fc9" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" exitCode=0 Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.923266 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgvfn" event={"ID":"46c8448f-d7a2-473d-9694-402273d86fc9","Type":"ContainerDied","Data":"d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f"} Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.928274 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="4b03a1975ff91abe6f92e545f0ab1b94a8a292e0264c3f7e53cacd130fa2f25b" exitCode=0 Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.928330 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"4b03a1975ff91abe6f92e545f0ab1b94a8a292e0264c3f7e53cacd130fa2f25b"} Feb 02 11:13:13 crc kubenswrapper[4925]: I0202 11:13:13.928378 4925 scope.go:117] "RemoveContainer" containerID="ffca907841f0a5bec449b7e08e60cef6f7cea31a8df22b28332865ae60f507bc" Feb 02 11:13:14 crc kubenswrapper[4925]: I0202 11:13:14.436132 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: 
\"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:14 crc kubenswrapper[4925]: I0202 11:13:14.436200 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:14 crc kubenswrapper[4925]: E0202 11:13:14.436374 4925 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 11:13:14 crc kubenswrapper[4925]: E0202 11:13:14.436445 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:18.436425296 +0000 UTC m=+975.440674258 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "metrics-server-cert" not found Feb 02 11:13:14 crc kubenswrapper[4925]: E0202 11:13:14.436902 4925 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 11:13:14 crc kubenswrapper[4925]: E0202 11:13:14.436934 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:18.43692341 +0000 UTC m=+975.441172372 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "webhook-server-cert" not found Feb 02 11:13:17 crc kubenswrapper[4925]: E0202 11:13:17.366283 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 11:13:17 crc kubenswrapper[4925]: E0202 11:13:17.367160 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 11:13:17 crc kubenswrapper[4925]: E0202 11:13:17.367871 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 11:13:17 crc kubenswrapper[4925]: E0202 11:13:17.367949 4925 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/community-operators-pgvfn" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="registry-server" Feb 02 11:13:17 crc kubenswrapper[4925]: I0202 11:13:17.384748 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:17 crc kubenswrapper[4925]: E0202 11:13:17.384879 4925 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:17 crc kubenswrapper[4925]: E0202 11:13:17.384957 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert podName:9b6aadaa-89ca-46f2-bf48-59726671b789 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:25.384935987 +0000 UTC m=+982.389184949 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert") pod "infra-operator-controller-manager-79955696d6-m9rb5" (UID: "9b6aadaa-89ca-46f2-bf48-59726671b789") : secret "infra-operator-webhook-server-cert" not found Feb 02 11:13:17 crc kubenswrapper[4925]: I0202 11:13:17.891330 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:17 crc kubenswrapper[4925]: E0202 11:13:17.891532 4925 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:17 crc kubenswrapper[4925]: E0202 11:13:17.891591 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert podName:a4e64115-b62c-421f-8072-88fc52eef59e nodeName:}" failed. No retries permitted until 2026-02-02 11:13:25.891576073 +0000 UTC m=+982.895825035 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" (UID: "a4e64115-b62c-421f-8072-88fc52eef59e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:18 crc kubenswrapper[4925]: I0202 11:13:18.498348 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:18 crc kubenswrapper[4925]: I0202 11:13:18.498410 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:18 crc kubenswrapper[4925]: E0202 11:13:18.498594 4925 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 11:13:18 crc kubenswrapper[4925]: E0202 11:13:18.498649 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:26.498634 +0000 UTC m=+983.502882952 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "metrics-server-cert" not found Feb 02 11:13:18 crc kubenswrapper[4925]: E0202 11:13:18.498649 4925 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 11:13:18 crc kubenswrapper[4925]: E0202 11:13:18.498814 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs podName:7112b3b6-a74c-4a93-94a2-8cbdbfd960b0 nodeName:}" failed. No retries permitted until 2026-02-02 11:13:26.498786134 +0000 UTC m=+983.503035156 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs") pod "openstack-operator-controller-manager-5d4f579c97-rrqkc" (UID: "7112b3b6-a74c-4a93-94a2-8cbdbfd960b0") : secret "webhook-server-cert" not found Feb 02 11:13:25 crc kubenswrapper[4925]: I0202 11:13:25.414560 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:25 crc kubenswrapper[4925]: I0202 11:13:25.420813 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b6aadaa-89ca-46f2-bf48-59726671b789-cert\") pod \"infra-operator-controller-manager-79955696d6-m9rb5\" (UID: \"9b6aadaa-89ca-46f2-bf48-59726671b789\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:25 crc 
kubenswrapper[4925]: I0202 11:13:25.444749 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:13:25 crc kubenswrapper[4925]: I0202 11:13:25.664778 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:13:25 crc kubenswrapper[4925]: I0202 11:13:25.922687 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:25 crc kubenswrapper[4925]: E0202 11:13:25.922837 4925 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:25 crc kubenswrapper[4925]: E0202 11:13:25.922910 4925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert podName:a4e64115-b62c-421f-8072-88fc52eef59e nodeName:}" failed. No retries permitted until 2026-02-02 11:13:41.922886223 +0000 UTC m=+998.927135185 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" (UID: "a4e64115-b62c-421f-8072-88fc52eef59e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 11:13:26 crc kubenswrapper[4925]: I0202 11:13:26.531440 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:26 crc kubenswrapper[4925]: I0202 11:13:26.531503 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:26 crc kubenswrapper[4925]: I0202 11:13:26.535566 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-webhook-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") " pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:26 crc kubenswrapper[4925]: I0202 11:13:26.535736 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7112b3b6-a74c-4a93-94a2-8cbdbfd960b0-metrics-certs\") pod \"openstack-operator-controller-manager-5d4f579c97-rrqkc\" (UID: \"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0\") 
" pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:26 crc kubenswrapper[4925]: I0202 11:13:26.564529 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:27 crc kubenswrapper[4925]: E0202 11:13:27.366318 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 11:13:27 crc kubenswrapper[4925]: E0202 11:13:27.366812 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 11:13:27 crc kubenswrapper[4925]: E0202 11:13:27.367177 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 11:13:27 crc kubenswrapper[4925]: E0202 11:13:27.367227 4925 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/community-operators-pgvfn" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="registry-server" Feb 02 11:13:32 crc kubenswrapper[4925]: E0202 11:13:32.622192 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Feb 02 11:13:32 crc kubenswrapper[4925]: E0202 11:13:32.622644 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-mfxvn_openstack-operators(8405a39c-7526-47b8-93b8-b9bb03cb970b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:13:32 crc kubenswrapper[4925]: E0202 11:13:32.623860 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" podUID="8405a39c-7526-47b8-93b8-b9bb03cb970b" Feb 02 11:13:33 crc kubenswrapper[4925]: E0202 11:13:33.087454 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" podUID="8405a39c-7526-47b8-93b8-b9bb03cb970b" Feb 02 11:13:33 crc kubenswrapper[4925]: E0202 11:13:33.990851 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 02 11:13:33 crc kubenswrapper[4925]: E0202 11:13:33.991346 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krmnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-kbc5t_openstack-operators(a8a71810-ebcf-4908-8e41-73fdce287188): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:13:33 crc kubenswrapper[4925]: E0202 11:13:33.992612 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" podUID="a8a71810-ebcf-4908-8e41-73fdce287188" Feb 02 11:13:34 crc kubenswrapper[4925]: E0202 11:13:34.089933 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Feb 02 11:13:34 crc kubenswrapper[4925]: E0202 11:13:34.090286 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7bsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-zrg4p_openstack-operators(88bf0458-e0ab-4b1b-ad4d-01e0f51780e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:13:34 crc kubenswrapper[4925]: E0202 11:13:34.091530 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" podUID="88bf0458-e0ab-4b1b-ad4d-01e0f51780e8" Feb 02 11:13:34 crc kubenswrapper[4925]: E0202 11:13:34.095108 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" podUID="a8a71810-ebcf-4908-8e41-73fdce287188" Feb 02 11:13:35 crc kubenswrapper[4925]: E0202 11:13:35.101483 4925 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" podUID="88bf0458-e0ab-4b1b-ad4d-01e0f51780e8" Feb 02 11:13:35 crc kubenswrapper[4925]: E0202 11:13:35.325142 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Feb 02 11:13:35 crc kubenswrapper[4925]: E0202 11:13:35.325653 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c28q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-f9rbf_openstack-operators(6db50ed1-76a9-48ad-b08e-07edd9d07421): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:13:35 crc kubenswrapper[4925]: E0202 11:13:35.327067 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" podUID="6db50ed1-76a9-48ad-b08e-07edd9d07421" Feb 02 11:13:35 crc kubenswrapper[4925]: E0202 11:13:35.795507 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Feb 02 11:13:35 crc kubenswrapper[4925]: E0202 11:13:35.795958 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-79qdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-8nf8m_openstack-operators(7b8e50f8-9611-4be4-aa4e-a0834ec27a24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:13:35 crc kubenswrapper[4925]: E0202 11:13:35.797238 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" podUID="7b8e50f8-9611-4be4-aa4e-a0834ec27a24" Feb 02 11:13:36 crc kubenswrapper[4925]: E0202 11:13:36.108333 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" podUID="6db50ed1-76a9-48ad-b08e-07edd9d07421" Feb 02 11:13:36 crc kubenswrapper[4925]: E0202 11:13:36.108373 4925 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" podUID="7b8e50f8-9611-4be4-aa4e-a0834ec27a24" Feb 02 11:13:36 crc kubenswrapper[4925]: E0202 11:13:36.667398 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Feb 02 11:13:36 crc kubenswrapper[4925]: E0202 11:13:36.667745 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fctxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-gbm72_openstack-operators(2ce3d469-8592-45c6-aba0-f1a607694c6d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:13:36 crc kubenswrapper[4925]: E0202 11:13:36.669069 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" podUID="2ce3d469-8592-45c6-aba0-f1a607694c6d" Feb 02 11:13:37 crc kubenswrapper[4925]: E0202 11:13:37.114243 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" podUID="2ce3d469-8592-45c6-aba0-f1a607694c6d" Feb 02 11:13:37 crc kubenswrapper[4925]: E0202 11:13:37.366471 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 11:13:37 crc kubenswrapper[4925]: E0202 11:13:37.367188 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 11:13:37 crc kubenswrapper[4925]: E0202 11:13:37.367797 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 11:13:37 crc kubenswrapper[4925]: E0202 11:13:37.367841 4925 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-pgvfn" 
podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="registry-server" Feb 02 11:13:39 crc kubenswrapper[4925]: E0202 11:13:39.751656 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Feb 02 11:13:39 crc kubenswrapper[4925]: E0202 11:13:39.752222 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r5f7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-k579v_openstack-operators(ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:13:39 crc kubenswrapper[4925]: E0202 11:13:39.753400 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" podUID="ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0" Feb 02 11:13:40 crc kubenswrapper[4925]: E0202 11:13:40.132792 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" podUID="ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0" Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.446767 4925 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:13:40 crc kubenswrapper[4925]: E0202 11:13:40.509135 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 02 11:13:40 crc kubenswrapper[4925]: E0202 11:13:40.509443 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lxnjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-bfkmp_openstack-operators(252fe85c-1645-4a4b-bd66-efe5814e9b09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:13:40 crc kubenswrapper[4925]: E0202 11:13:40.510627 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" podUID="252fe85c-1645-4a4b-bd66-efe5814e9b09" Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.529606 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-catalog-content\") pod \"46c8448f-d7a2-473d-9694-402273d86fc9\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.529707 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-utilities\") pod \"46c8448f-d7a2-473d-9694-402273d86fc9\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.529744 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6wzv\" (UniqueName: \"kubernetes.io/projected/46c8448f-d7a2-473d-9694-402273d86fc9-kube-api-access-k6wzv\") pod \"46c8448f-d7a2-473d-9694-402273d86fc9\" (UID: \"46c8448f-d7a2-473d-9694-402273d86fc9\") " Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.531453 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-utilities" (OuterVolumeSpecName: "utilities") pod "46c8448f-d7a2-473d-9694-402273d86fc9" (UID: "46c8448f-d7a2-473d-9694-402273d86fc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.536656 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c8448f-d7a2-473d-9694-402273d86fc9-kube-api-access-k6wzv" (OuterVolumeSpecName: "kube-api-access-k6wzv") pod "46c8448f-d7a2-473d-9694-402273d86fc9" (UID: "46c8448f-d7a2-473d-9694-402273d86fc9"). InnerVolumeSpecName "kube-api-access-k6wzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.584884 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46c8448f-d7a2-473d-9694-402273d86fc9" (UID: "46c8448f-d7a2-473d-9694-402273d86fc9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.631955 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.632014 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c8448f-d7a2-473d-9694-402273d86fc9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:13:40 crc kubenswrapper[4925]: I0202 11:13:40.632033 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6wzv\" (UniqueName: \"kubernetes.io/projected/46c8448f-d7a2-473d-9694-402273d86fc9-kube-api-access-k6wzv\") on node \"crc\" DevicePath \"\"" Feb 02 11:13:41 crc kubenswrapper[4925]: I0202 11:13:41.141158 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgvfn" Feb 02 11:13:41 crc kubenswrapper[4925]: I0202 11:13:41.141129 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgvfn" event={"ID":"46c8448f-d7a2-473d-9694-402273d86fc9","Type":"ContainerDied","Data":"ded248fd3b34902afc1855b7cb4c3f7b70b4e6e5d46791754e70b0b3fc51719d"} Feb 02 11:13:41 crc kubenswrapper[4925]: I0202 11:13:41.141364 4925 scope.go:117] "RemoveContainer" containerID="d4ae8f0f519f1899546324d67d4eef51dee57553b0a253473cb42ffe4359925f" Feb 02 11:13:41 crc kubenswrapper[4925]: E0202 11:13:41.144826 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" 
podUID="252fe85c-1645-4a4b-bd66-efe5814e9b09" Feb 02 11:13:41 crc kubenswrapper[4925]: I0202 11:13:41.186759 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgvfn"] Feb 02 11:13:41 crc kubenswrapper[4925]: I0202 11:13:41.194761 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pgvfn"] Feb 02 11:13:41 crc kubenswrapper[4925]: I0202 11:13:41.949018 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:41 crc kubenswrapper[4925]: I0202 11:13:41.955044 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4e64115-b62c-421f-8072-88fc52eef59e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8\" (UID: \"a4e64115-b62c-421f-8072-88fc52eef59e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:42 crc kubenswrapper[4925]: I0202 11:13:42.120041 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:13:42 crc kubenswrapper[4925]: I0202 11:13:42.691953 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" path="/var/lib/kubelet/pods/46c8448f-d7a2-473d-9694-402273d86fc9/volumes" Feb 02 11:13:45 crc kubenswrapper[4925]: E0202 11:13:45.135801 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 02 11:13:45 crc kubenswrapper[4925]: E0202 11:13:45.136398 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hl9mh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vw6m6_openstack-operators(21d85aaf-29ca-4cc9-8831-bb5691bc29d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:13:45 crc kubenswrapper[4925]: E0202 11:13:45.137624 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" podUID="21d85aaf-29ca-4cc9-8831-bb5691bc29d9" Feb 02 11:13:45 crc kubenswrapper[4925]: E0202 11:13:45.295744 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" podUID="21d85aaf-29ca-4cc9-8831-bb5691bc29d9" Feb 02 11:13:45 crc 
kubenswrapper[4925]: I0202 11:13:45.454313 4925 scope.go:117] "RemoveContainer" containerID="3beb8b23835fdc850ead2f4d106e042cc52a493089ed2e391a8a4f16b9b1ec13" Feb 02 11:13:45 crc kubenswrapper[4925]: I0202 11:13:45.629212 4925 scope.go:117] "RemoveContainer" containerID="df2a7b76927888fbac040ef82812950e5f4432bedb3f6e0740ebb5753d8fc6fd" Feb 02 11:13:45 crc kubenswrapper[4925]: I0202 11:13:45.873966 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5"] Feb 02 11:13:45 crc kubenswrapper[4925]: I0202 11:13:45.918622 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8"] Feb 02 11:13:45 crc kubenswrapper[4925]: W0202 11:13:45.971973 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b6aadaa_89ca_46f2_bf48_59726671b789.slice/crio-559677b0ea0570d90455844cee17bdf96630f170a24b7ff28cafca6f7c4ae255 WatchSource:0}: Error finding container 559677b0ea0570d90455844cee17bdf96630f170a24b7ff28cafca6f7c4ae255: Status 404 returned error can't find the container with id 559677b0ea0570d90455844cee17bdf96630f170a24b7ff28cafca6f7c4ae255 Feb 02 11:13:45 crc kubenswrapper[4925]: I0202 11:13:45.974946 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc"] Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.197159 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" event={"ID":"85d89138-ff2c-4e69-bd55-bf6b2648d286","Type":"ContainerStarted","Data":"91ce000822a5d260653691b3b67af53b606921c8f7f6ee1351aad5aaa8cbe609"} Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.197540 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.204110 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" event={"ID":"2670eaa9-d6c1-479d-98d1-9a86c0c09305","Type":"ContainerStarted","Data":"b5205d2e83424c4398fb3813aafd0cce219ba40946bc38488d3217ca631a0e04"} Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.204288 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.228820 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"dc20c1950a2aee33db5a561db4bbc78e34cfd4881473af054b6cd76fb628d232"} Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.236388 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" event={"ID":"057f6b87-28a7-46c6-8d51-c32937d77a6a","Type":"ContainerStarted","Data":"9105ce8e563f02f6417882c9db30d982d5d24d4738c9b3993ec404e4731cd102"} Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.237259 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.241235 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" podStartSLOduration=9.365411218 podStartE2EDuration="37.241211321s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.174641642 +0000 UTC m=+968.178890604" lastFinishedPulling="2026-02-02 11:13:39.050441745 +0000 
UTC m=+996.054690707" observedRunningTime="2026-02-02 11:13:46.240548503 +0000 UTC m=+1003.244797475" watchObservedRunningTime="2026-02-02 11:13:46.241211321 +0000 UTC m=+1003.245460283" Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.249230 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" podStartSLOduration=8.252910792 podStartE2EDuration="37.249208185s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.32899049 +0000 UTC m=+968.333239452" lastFinishedPulling="2026-02-02 11:13:40.325287883 +0000 UTC m=+997.329536845" observedRunningTime="2026-02-02 11:13:46.220858257 +0000 UTC m=+1003.225107229" watchObservedRunningTime="2026-02-02 11:13:46.249208185 +0000 UTC m=+1003.253457147" Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.251653 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" event={"ID":"a4e64115-b62c-421f-8072-88fc52eef59e","Type":"ContainerStarted","Data":"f58ad91fb3eaa036f1dd82773f849fa5e7ed779591b38523c2f20c6501cc78d2"} Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.260291 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" event={"ID":"271532e8-0b2a-40bc-b982-56e6c0c706dc","Type":"ContainerStarted","Data":"9a2e277aae59a04fbb34766fd1f1a5218b36d0351e6469bfbf64402179d44c2c"} Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.260401 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.269105 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" 
podStartSLOduration=11.193549289 podStartE2EDuration="37.269064465s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:10.583243063 +0000 UTC m=+967.587492025" lastFinishedPulling="2026-02-02 11:13:36.658758239 +0000 UTC m=+993.663007201" observedRunningTime="2026-02-02 11:13:46.262255613 +0000 UTC m=+1003.266504575" watchObservedRunningTime="2026-02-02 11:13:46.269064465 +0000 UTC m=+1003.273313427" Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.285628 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" event={"ID":"9b6aadaa-89ca-46f2-bf48-59726671b789","Type":"ContainerStarted","Data":"559677b0ea0570d90455844cee17bdf96630f170a24b7ff28cafca6f7c4ae255"} Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.338365 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" event={"ID":"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0","Type":"ContainerStarted","Data":"b409aafbc618e71bfdb2a241783be152605f4e31586c73304f31924473fff07d"} Feb 02 11:13:46 crc kubenswrapper[4925]: I0202 11:13:46.777401 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" podStartSLOduration=7.771875643 podStartE2EDuration="37.777378465s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:10.319787431 +0000 UTC m=+967.324036393" lastFinishedPulling="2026-02-02 11:13:40.325290243 +0000 UTC m=+997.329539215" observedRunningTime="2026-02-02 11:13:46.318581168 +0000 UTC m=+1003.322830130" watchObservedRunningTime="2026-02-02 11:13:46.777378465 +0000 UTC m=+1003.781627427" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.355687 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" 
event={"ID":"e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73","Type":"ContainerStarted","Data":"4cebd4f3d473d98b10949efd56fac40649ea8689c57a40ad2d12895ccc6bd0ba"} Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.356918 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.362543 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" event={"ID":"e11ef3f5-cbad-483b-a5a6-dedfb5ec556f","Type":"ContainerStarted","Data":"4f5020eae527bff2a8678ed53abb79873b70fc2fed55df4a98c119d901529ee8"} Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.363351 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.381234 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" event={"ID":"7112b3b6-a74c-4a93-94a2-8cbdbfd960b0","Type":"ContainerStarted","Data":"4f25f161170e16e70051084bae7ae13559739b15bfa696ed04c8b8a365a3b8ee"} Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.381887 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.396434 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" event={"ID":"2d3514fc-34cd-4021-a4d9-662abe6bb56e","Type":"ContainerStarted","Data":"e353b17d1682685c8860322ee85273bf039e816a49dd04e73544605d31c25ce8"} Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.397147 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.399259 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" event={"ID":"714728e3-dda9-47d3-aca5-c9bf8a13c2eb","Type":"ContainerStarted","Data":"d4e2c43909189402a8da2c48e6e5b68e1f545ae272f50f370218dad66a5b9747"} Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.399660 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.413906 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" podStartSLOduration=4.609693141 podStartE2EDuration="38.413888129s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.721483187 +0000 UTC m=+968.725732149" lastFinishedPulling="2026-02-02 11:13:45.525678175 +0000 UTC m=+1002.529927137" observedRunningTime="2026-02-02 11:13:47.412720068 +0000 UTC m=+1004.416969030" watchObservedRunningTime="2026-02-02 11:13:47.413888129 +0000 UTC m=+1004.418137101" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.421547 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" podStartSLOduration=9.255449715 podStartE2EDuration="38.421521433s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.220298547 +0000 UTC m=+968.224547509" lastFinishedPulling="2026-02-02 11:13:40.386370255 +0000 UTC m=+997.390619227" observedRunningTime="2026-02-02 11:13:47.381271638 +0000 UTC m=+1004.385520610" watchObservedRunningTime="2026-02-02 11:13:47.421521433 +0000 UTC m=+1004.425770405" Feb 02 11:13:47 crc kubenswrapper[4925]: 
I0202 11:13:47.430919 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" event={"ID":"07bdcdf5-a330-4524-9695-d089c2fbd4ae","Type":"ContainerStarted","Data":"5d8fd27d4dfb76dbbf0fda8ce516a878f81891a2bcbd66b297177d93202229d9"} Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.431185 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.438766 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" event={"ID":"fc69d485-23dc-4c0c-88ef-9fc6729d977d","Type":"ContainerStarted","Data":"5e1956815796f6a86f45435c825b33850b393d7235d012d40d1a61e76815bdde"} Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.445211 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.462713 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" podStartSLOduration=37.462696423 podStartE2EDuration="37.462696423s" podCreationTimestamp="2026-02-02 11:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:13:47.460108964 +0000 UTC m=+1004.464357926" watchObservedRunningTime="2026-02-02 11:13:47.462696423 +0000 UTC m=+1004.466945385" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.484108 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" 
event={"ID":"6f64f1b5-8b8f-48b6-934c-5d148565b151","Type":"ContainerStarted","Data":"058e2c7c2fac3b5539c0b0e4e6a1960a8e35f52fa17f511523b148a8b49e8caa"} Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.484148 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.533026 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" podStartSLOduration=9.220370245 podStartE2EDuration="38.533010601s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.061004849 +0000 UTC m=+968.065253811" lastFinishedPulling="2026-02-02 11:13:40.373645205 +0000 UTC m=+997.377894167" observedRunningTime="2026-02-02 11:13:47.531685446 +0000 UTC m=+1004.535934418" watchObservedRunningTime="2026-02-02 11:13:47.533010601 +0000 UTC m=+1004.537259563" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.533970 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" podStartSLOduration=10.590652316 podStartE2EDuration="38.533963597s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.709853097 +0000 UTC m=+968.714102059" lastFinishedPulling="2026-02-02 11:13:39.653164388 +0000 UTC m=+996.657413340" observedRunningTime="2026-02-02 11:13:47.51236904 +0000 UTC m=+1004.516617992" watchObservedRunningTime="2026-02-02 11:13:47.533963597 +0000 UTC m=+1004.538212559" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.562507 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" podStartSLOduration=4.759579555 podStartE2EDuration="38.562490339s" podCreationTimestamp="2026-02-02 
11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.722044462 +0000 UTC m=+968.726293424" lastFinishedPulling="2026-02-02 11:13:45.524955246 +0000 UTC m=+1002.529204208" observedRunningTime="2026-02-02 11:13:47.55579134 +0000 UTC m=+1004.560040302" watchObservedRunningTime="2026-02-02 11:13:47.562490339 +0000 UTC m=+1004.566739301" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.586439 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" podStartSLOduration=4.787293325 podStartE2EDuration="38.586416238s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.726365267 +0000 UTC m=+968.730614229" lastFinishedPulling="2026-02-02 11:13:45.52548818 +0000 UTC m=+1002.529737142" observedRunningTime="2026-02-02 11:13:47.583814709 +0000 UTC m=+1004.588063671" watchObservedRunningTime="2026-02-02 11:13:47.586416238 +0000 UTC m=+1004.590665200" Feb 02 11:13:47 crc kubenswrapper[4925]: I0202 11:13:47.694090 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" podStartSLOduration=8.9768717 podStartE2EDuration="38.694055304s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:10.664740052 +0000 UTC m=+967.668989014" lastFinishedPulling="2026-02-02 11:13:40.381923656 +0000 UTC m=+997.386172618" observedRunningTime="2026-02-02 11:13:47.624604699 +0000 UTC m=+1004.628853661" watchObservedRunningTime="2026-02-02 11:13:47.694055304 +0000 UTC m=+1004.698304266" Feb 02 11:13:48 crc kubenswrapper[4925]: I0202 11:13:48.496900 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" 
event={"ID":"8405a39c-7526-47b8-93b8-b9bb03cb970b","Type":"ContainerStarted","Data":"83be78e20eb7dd30e77e7296f8962eeb3eea9355cd2b1103768c42bc518ed51c"} Feb 02 11:13:48 crc kubenswrapper[4925]: I0202 11:13:48.518851 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" podStartSLOduration=3.317035238 podStartE2EDuration="39.518830139s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.221484179 +0000 UTC m=+968.225733141" lastFinishedPulling="2026-02-02 11:13:47.42327908 +0000 UTC m=+1004.427528042" observedRunningTime="2026-02-02 11:13:48.516696641 +0000 UTC m=+1005.520945603" watchObservedRunningTime="2026-02-02 11:13:48.518830139 +0000 UTC m=+1005.523079101" Feb 02 11:13:50 crc kubenswrapper[4925]: I0202 11:13:50.057262 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-wggcm" Feb 02 11:13:50 crc kubenswrapper[4925]: I0202 11:13:50.060182 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" Feb 02 11:13:50 crc kubenswrapper[4925]: I0202 11:13:50.294014 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-zksqs" Feb 02 11:13:56 crc kubenswrapper[4925]: I0202 11:13:56.571753 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5d4f579c97-rrqkc" Feb 02 11:13:59 crc kubenswrapper[4925]: I0202 11:13:59.637970 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5qxgq" Feb 02 11:13:59 crc kubenswrapper[4925]: I0202 11:13:59.661380 4925 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-zvg88" Feb 02 11:13:59 crc kubenswrapper[4925]: I0202 11:13:59.698757 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-56b8d567c6-9sb76" Feb 02 11:13:59 crc kubenswrapper[4925]: I0202 11:13:59.878105 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fgf8c" Feb 02 11:14:00 crc kubenswrapper[4925]: I0202 11:14:00.017321 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-swkbc" Feb 02 11:14:00 crc kubenswrapper[4925]: I0202 11:14:00.062453 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-mfxvn" Feb 02 11:14:00 crc kubenswrapper[4925]: I0202 11:14:00.159707 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-v4m7x" Feb 02 11:14:00 crc kubenswrapper[4925]: I0202 11:14:00.458619 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-5rz7t" Feb 02 11:14:00 crc kubenswrapper[4925]: I0202 11:14:00.777836 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-4lhnh" Feb 02 11:14:00 crc kubenswrapper[4925]: I0202 11:14:00.872211 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-8mpnq" Feb 02 11:14:07 crc kubenswrapper[4925]: E0202 11:14:07.986257 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:a504ab83288310bbd8e39f3a01faaa3c210a14d94bbd32124e9eadd46227d6b3" Feb 02 11:14:07 crc kubenswrapper[4925]: E0202 11:14:07.986817 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:a504ab83288310bbd8e39f3a01faaa3c210a14d94bbd32124e9eadd46227d6b3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndd72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-79955696d6-m9rb5_openstack-operators(9b6aadaa-89ca-46f2-bf48-59726671b789): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:14:07 crc kubenswrapper[4925]: E0202 11:14:07.988268 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" podUID="9b6aadaa-89ca-46f2-bf48-59726671b789" Feb 02 11:14:09 crc kubenswrapper[4925]: E0202 11:14:09.075565 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:a504ab83288310bbd8e39f3a01faaa3c210a14d94bbd32124e9eadd46227d6b3\\\"\"" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" podUID="9b6aadaa-89ca-46f2-bf48-59726671b789" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.674205 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" event={"ID":"ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0","Type":"ContainerStarted","Data":"d23fb4761349ed82ccc424afc8de11fd7ccb951517528a66499ab30d45524910"} Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.674570 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" event={"ID":"88bf0458-e0ab-4b1b-ad4d-01e0f51780e8","Type":"ContainerStarted","Data":"3015588a835e84b1a63fe76c91084d84e6443db457f20ee0d9c0358d7ef77fe5"} Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.674758 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.674831 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.676840 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" event={"ID":"2ce3d469-8592-45c6-aba0-f1a607694c6d","Type":"ContainerStarted","Data":"46413fc40d454a7cd563845271ffe39265b2f10f53bec1d3d164b4af3583c564"} Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.677381 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.687701 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" event={"ID":"a4e64115-b62c-421f-8072-88fc52eef59e","Type":"ContainerStarted","Data":"954246d158bb8caf48106e8e6de91ced20a17aa5e5c1476a8a9e90b97cf773f6"} Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.687880 4925 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.689732 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" event={"ID":"6db50ed1-76a9-48ad-b08e-07edd9d07421","Type":"ContainerStarted","Data":"8263f81e64ea369c02876c65bfa4793f197f8e0c44032ebbc9c48723170d62e4"} Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.690058 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.691121 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" event={"ID":"a8a71810-ebcf-4908-8e41-73fdce287188","Type":"ContainerStarted","Data":"7a17314787eeb1302af8c43dfd05c7b2e6b72489c7da9af323df83d9e1e624c7"} Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.691315 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.692447 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" event={"ID":"252fe85c-1645-4a4b-bd66-efe5814e9b09","Type":"ContainerStarted","Data":"938ea6e70db63bdcdd5e6043c5bd192f62d93d3a8638a4e3b20ba6de5b313ef0"} Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.692674 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.693959 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" event={"ID":"7b8e50f8-9611-4be4-aa4e-a0834ec27a24","Type":"ContainerStarted","Data":"d12e7df82c0c5a60dccde51547025e461b66b2f1d82655ba56584a8f8a7a9f9a"} Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.694226 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.695546 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" event={"ID":"21d85aaf-29ca-4cc9-8831-bb5691bc29d9","Type":"ContainerStarted","Data":"74c5bccbc77523621da8dc918c714a504b6d360af7275187f9a0646fae3483f1"} Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.716360 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" podStartSLOduration=3.810825785 podStartE2EDuration="1m3.716343671s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.662699362 +0000 UTC m=+968.666948324" lastFinishedPulling="2026-02-02 11:14:11.568217238 +0000 UTC m=+1028.572466210" observedRunningTime="2026-02-02 11:14:12.714047459 +0000 UTC m=+1029.718296421" watchObservedRunningTime="2026-02-02 11:14:12.716343671 +0000 UTC m=+1029.720592633" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.792025 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" podStartSLOduration=3.677386641 podStartE2EDuration="1m3.792003842s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.454906042 +0000 UTC m=+968.459155004" lastFinishedPulling="2026-02-02 11:14:11.569523213 +0000 UTC m=+1028.573772205" observedRunningTime="2026-02-02 11:14:12.752850746 +0000 
UTC m=+1029.757099708" watchObservedRunningTime="2026-02-02 11:14:12.792003842 +0000 UTC m=+1029.796252804" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.794368 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" podStartSLOduration=3.920476791 podStartE2EDuration="1m3.794358985s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.695111095 +0000 UTC m=+968.699360057" lastFinishedPulling="2026-02-02 11:14:11.568993289 +0000 UTC m=+1028.573242251" observedRunningTime="2026-02-02 11:14:12.788234391 +0000 UTC m=+1029.792483353" watchObservedRunningTime="2026-02-02 11:14:12.794358985 +0000 UTC m=+1029.798607947" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.811432 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" podStartSLOduration=3.442053589 podStartE2EDuration="1m3.811412761s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.199625677 +0000 UTC m=+968.203874639" lastFinishedPulling="2026-02-02 11:14:11.568984849 +0000 UTC m=+1028.573233811" observedRunningTime="2026-02-02 11:14:12.80540257 +0000 UTC m=+1029.809651532" watchObservedRunningTime="2026-02-02 11:14:12.811412761 +0000 UTC m=+1029.815661723" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.860550 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" podStartSLOduration=3.735875373 podStartE2EDuration="1m3.860531383s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.452664702 +0000 UTC m=+968.456913664" lastFinishedPulling="2026-02-02 11:14:11.577320712 +0000 UTC m=+1028.581569674" observedRunningTime="2026-02-02 11:14:12.826951626 +0000 UTC m=+1029.831200588" 
watchObservedRunningTime="2026-02-02 11:14:12.860531383 +0000 UTC m=+1029.864780345" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.866792 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" podStartSLOduration=38.277782113 podStartE2EDuration="1m3.866770089s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:45.979815198 +0000 UTC m=+1002.984064160" lastFinishedPulling="2026-02-02 11:14:11.568803164 +0000 UTC m=+1028.573052136" observedRunningTime="2026-02-02 11:14:12.859557777 +0000 UTC m=+1029.863806739" watchObservedRunningTime="2026-02-02 11:14:12.866770089 +0000 UTC m=+1029.871019051" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.878103 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vw6m6" podStartSLOduration=3.112671175 podStartE2EDuration="1m2.878062971s" podCreationTimestamp="2026-02-02 11:13:10 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.803838819 +0000 UTC m=+968.808087781" lastFinishedPulling="2026-02-02 11:14:11.569230615 +0000 UTC m=+1028.573479577" observedRunningTime="2026-02-02 11:14:12.873103099 +0000 UTC m=+1029.877352081" watchObservedRunningTime="2026-02-02 11:14:12.878062971 +0000 UTC m=+1029.882311933" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.892982 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" podStartSLOduration=3.267545483 podStartE2EDuration="1m3.892958009s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:10.943268855 +0000 UTC m=+967.947517817" lastFinishedPulling="2026-02-02 11:14:11.568681381 +0000 UTC m=+1028.572930343" observedRunningTime="2026-02-02 11:14:12.89150492 +0000 UTC m=+1029.895753882" 
watchObservedRunningTime="2026-02-02 11:14:12.892958009 +0000 UTC m=+1029.897206981" Feb 02 11:14:12 crc kubenswrapper[4925]: I0202 11:14:12.914107 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" podStartSLOduration=12.650245943 podStartE2EDuration="1m3.914053693s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:11.686969158 +0000 UTC m=+968.691218110" lastFinishedPulling="2026-02-02 11:14:02.950776898 +0000 UTC m=+1019.955025860" observedRunningTime="2026-02-02 11:14:12.906132281 +0000 UTC m=+1029.910381253" watchObservedRunningTime="2026-02-02 11:14:12.914053693 +0000 UTC m=+1029.918302675" Feb 02 11:14:19 crc kubenswrapper[4925]: I0202 11:14:19.891160 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-kbc5t" Feb 02 11:14:20 crc kubenswrapper[4925]: I0202 11:14:20.072054 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-f9rbf" Feb 02 11:14:20 crc kubenswrapper[4925]: I0202 11:14:20.242349 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8nf8m" Feb 02 11:14:20 crc kubenswrapper[4925]: I0202 11:14:20.293005 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bfkmp" Feb 02 11:14:20 crc kubenswrapper[4925]: I0202 11:14:20.365032 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-zrg4p" Feb 02 11:14:20 crc kubenswrapper[4925]: I0202 11:14:20.539134 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-k579v" Feb 02 11:14:20 crc kubenswrapper[4925]: I0202 11:14:20.958670 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-gbm72" Feb 02 11:14:22 crc kubenswrapper[4925]: I0202 11:14:22.126765 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8" Feb 02 11:14:22 crc kubenswrapper[4925]: I0202 11:14:22.755341 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" event={"ID":"9b6aadaa-89ca-46f2-bf48-59726671b789","Type":"ContainerStarted","Data":"ca3d686df46d55a6304def4a231fdb7fab693f4f47a4b47d6da053b6ce58cfaf"} Feb 02 11:14:22 crc kubenswrapper[4925]: I0202 11:14:22.755576 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:14:22 crc kubenswrapper[4925]: I0202 11:14:22.770910 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" podStartSLOduration=38.020555712 podStartE2EDuration="1m13.770893404s" podCreationTimestamp="2026-02-02 11:13:09 +0000 UTC" firstStartedPulling="2026-02-02 11:13:45.985582332 +0000 UTC m=+1002.989831304" lastFinishedPulling="2026-02-02 11:14:21.735920034 +0000 UTC m=+1038.740168996" observedRunningTime="2026-02-02 11:14:22.769121326 +0000 UTC m=+1039.773370308" watchObservedRunningTime="2026-02-02 11:14:22.770893404 +0000 UTC m=+1039.775142366" Feb 02 11:14:35 crc kubenswrapper[4925]: I0202 11:14:35.451640 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-m9rb5" Feb 02 11:14:52 crc kubenswrapper[4925]: 
I0202 11:14:52.051827 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9zjtb"] Feb 02 11:14:52 crc kubenswrapper[4925]: E0202 11:14:52.052804 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="extract-content" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.052820 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="extract-content" Feb 02 11:14:52 crc kubenswrapper[4925]: E0202 11:14:52.052836 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="registry-server" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.052844 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="registry-server" Feb 02 11:14:52 crc kubenswrapper[4925]: E0202 11:14:52.052860 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="extract-utilities" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.052868 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="extract-utilities" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.053043 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c8448f-d7a2-473d-9694-402273d86fc9" containerName="registry-server" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.053905 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.058013 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wzql7" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.058185 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.058333 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.058624 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.066876 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9zjtb"] Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.112476 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ksrb8"] Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.113843 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.118586 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.129765 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ksrb8"] Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.152663 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-config\") pod \"dnsmasq-dns-78dd6ddcc-ksrb8\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.152751 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fa0562-8605-41e2-a10b-78cb1dae73a3-config\") pod \"dnsmasq-dns-675f4bcbfc-9zjtb\" (UID: \"13fa0562-8605-41e2-a10b-78cb1dae73a3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.152796 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5b2h\" (UniqueName: \"kubernetes.io/projected/13fa0562-8605-41e2-a10b-78cb1dae73a3-kube-api-access-b5b2h\") pod \"dnsmasq-dns-675f4bcbfc-9zjtb\" (UID: \"13fa0562-8605-41e2-a10b-78cb1dae73a3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.152822 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gln\" (UniqueName: \"kubernetes.io/projected/72b52ea4-726d-472a-9614-cfdfeff2508a-kube-api-access-66gln\") pod \"dnsmasq-dns-78dd6ddcc-ksrb8\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.152964 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ksrb8\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.254211 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66gln\" (UniqueName: \"kubernetes.io/projected/72b52ea4-726d-472a-9614-cfdfeff2508a-kube-api-access-66gln\") pod \"dnsmasq-dns-78dd6ddcc-ksrb8\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.254547 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ksrb8\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.254697 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-config\") pod \"dnsmasq-dns-78dd6ddcc-ksrb8\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.254845 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fa0562-8605-41e2-a10b-78cb1dae73a3-config\") pod \"dnsmasq-dns-675f4bcbfc-9zjtb\" (UID: \"13fa0562-8605-41e2-a10b-78cb1dae73a3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:14:52 crc kubenswrapper[4925]: 
I0202 11:14:52.254979 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5b2h\" (UniqueName: \"kubernetes.io/projected/13fa0562-8605-41e2-a10b-78cb1dae73a3-kube-api-access-b5b2h\") pod \"dnsmasq-dns-675f4bcbfc-9zjtb\" (UID: \"13fa0562-8605-41e2-a10b-78cb1dae73a3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.255653 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ksrb8\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.255863 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fa0562-8605-41e2-a10b-78cb1dae73a3-config\") pod \"dnsmasq-dns-675f4bcbfc-9zjtb\" (UID: \"13fa0562-8605-41e2-a10b-78cb1dae73a3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.256529 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-config\") pod \"dnsmasq-dns-78dd6ddcc-ksrb8\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.286962 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5b2h\" (UniqueName: \"kubernetes.io/projected/13fa0562-8605-41e2-a10b-78cb1dae73a3-kube-api-access-b5b2h\") pod \"dnsmasq-dns-675f4bcbfc-9zjtb\" (UID: \"13fa0562-8605-41e2-a10b-78cb1dae73a3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.286962 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-66gln\" (UniqueName: \"kubernetes.io/projected/72b52ea4-726d-472a-9614-cfdfeff2508a-kube-api-access-66gln\") pod \"dnsmasq-dns-78dd6ddcc-ksrb8\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.377210 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.437135 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.830678 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9zjtb"] Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.911215 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ksrb8"] Feb 02 11:14:52 crc kubenswrapper[4925]: W0202 11:14:52.915009 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72b52ea4_726d_472a_9614_cfdfeff2508a.slice/crio-c1938e64d95dbe65e0abe21393fcaf385d40790cf000ec1bd7ad20e1ebe489a0 WatchSource:0}: Error finding container c1938e64d95dbe65e0abe21393fcaf385d40790cf000ec1bd7ad20e1ebe489a0: Status 404 returned error can't find the container with id c1938e64d95dbe65e0abe21393fcaf385d40790cf000ec1bd7ad20e1ebe489a0 Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.955513 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" event={"ID":"72b52ea4-726d-472a-9614-cfdfeff2508a","Type":"ContainerStarted","Data":"c1938e64d95dbe65e0abe21393fcaf385d40790cf000ec1bd7ad20e1ebe489a0"} Feb 02 11:14:52 crc kubenswrapper[4925]: I0202 11:14:52.957732 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" 
event={"ID":"13fa0562-8605-41e2-a10b-78cb1dae73a3","Type":"ContainerStarted","Data":"0e4ed60bb561001fa0e0015fb4eb1f58be563b7aee3d1763c28d4c5685bb1e8f"} Feb 02 11:14:54 crc kubenswrapper[4925]: I0202 11:14:54.904017 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9zjtb"] Feb 02 11:14:54 crc kubenswrapper[4925]: I0202 11:14:54.940364 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpz4b"] Feb 02 11:14:54 crc kubenswrapper[4925]: I0202 11:14:54.941723 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:54 crc kubenswrapper[4925]: I0202 11:14:54.966110 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpz4b"] Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.023700 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-config\") pod \"dnsmasq-dns-666b6646f7-vpz4b\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.024099 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vpz4b\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.024190 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck8pk\" (UniqueName: \"kubernetes.io/projected/f41144d6-f6b8-4844-8cf4-31d25f69535e-kube-api-access-ck8pk\") pod \"dnsmasq-dns-666b6646f7-vpz4b\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " 
pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.124736 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vpz4b\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.124786 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck8pk\" (UniqueName: \"kubernetes.io/projected/f41144d6-f6b8-4844-8cf4-31d25f69535e-kube-api-access-ck8pk\") pod \"dnsmasq-dns-666b6646f7-vpz4b\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.124828 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-config\") pod \"dnsmasq-dns-666b6646f7-vpz4b\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.125765 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-config\") pod \"dnsmasq-dns-666b6646f7-vpz4b\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.125794 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vpz4b\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.156978 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck8pk\" (UniqueName: \"kubernetes.io/projected/f41144d6-f6b8-4844-8cf4-31d25f69535e-kube-api-access-ck8pk\") pod \"dnsmasq-dns-666b6646f7-vpz4b\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.210778 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ksrb8"] Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.243917 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-d6md8"] Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.253879 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.259266 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-d6md8"] Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.280731 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.428431 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njnwf\" (UniqueName: \"kubernetes.io/projected/f4232060-525a-4d70-8c74-1bc8d38330d3-kube-api-access-njnwf\") pod \"dnsmasq-dns-57d769cc4f-d6md8\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.428535 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-config\") pod \"dnsmasq-dns-57d769cc4f-d6md8\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.428554 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-d6md8\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.529811 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njnwf\" (UniqueName: \"kubernetes.io/projected/f4232060-525a-4d70-8c74-1bc8d38330d3-kube-api-access-njnwf\") pod \"dnsmasq-dns-57d769cc4f-d6md8\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.530276 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-config\") pod \"dnsmasq-dns-57d769cc4f-d6md8\" (UID: 
\"f4232060-525a-4d70-8c74-1bc8d38330d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.530312 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-d6md8\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.531050 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-d6md8\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.531801 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-config\") pod \"dnsmasq-dns-57d769cc4f-d6md8\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.562514 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njnwf\" (UniqueName: \"kubernetes.io/projected/f4232060-525a-4d70-8c74-1bc8d38330d3-kube-api-access-njnwf\") pod \"dnsmasq-dns-57d769cc4f-d6md8\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.587058 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:14:55 crc kubenswrapper[4925]: I0202 11:14:55.826245 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpz4b"] Feb 02 11:14:55 crc kubenswrapper[4925]: W0202 11:14:55.877938 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf41144d6_f6b8_4844_8cf4_31d25f69535e.slice/crio-79294bfc69a332648f95433d9a9f41330969bd94a822a4164bbb37a4403ef10a WatchSource:0}: Error finding container 79294bfc69a332648f95433d9a9f41330969bd94a822a4164bbb37a4403ef10a: Status 404 returned error can't find the container with id 79294bfc69a332648f95433d9a9f41330969bd94a822a4164bbb37a4403ef10a Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.000516 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" event={"ID":"f41144d6-f6b8-4844-8cf4-31d25f69535e","Type":"ContainerStarted","Data":"79294bfc69a332648f95433d9a9f41330969bd94a822a4164bbb37a4403ef10a"} Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.093557 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-d6md8"] Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.097385 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.100122 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.102205 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rbf7n" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.102519 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.102990 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.103118 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.103200 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.103370 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.104975 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.124542 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.240993 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.241685 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.242225 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.242260 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.242309 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.242367 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9sbc\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-kube-api-access-l9sbc\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.242460 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/0d6b9691-80b3-418b-a4c7-fc80c0438123-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.242491 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.242542 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d6b9691-80b3-418b-a4c7-fc80c0438123-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.242598 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-config-data\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.242626 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.344161 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.344223 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.344275 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.344296 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.345119 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.345167 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.345190 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9sbc\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-kube-api-access-l9sbc\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.345260 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d6b9691-80b3-418b-a4c7-fc80c0438123-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.345295 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.345331 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d6b9691-80b3-418b-a4c7-fc80c0438123-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.345378 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-config-data\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.345748 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.345747 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.346235 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-config-data\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.346785 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.347353 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.349149 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " 
pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.349342 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.349719 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d6b9691-80b3-418b-a4c7-fc80c0438123-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.352833 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d6b9691-80b3-418b-a4c7-fc80c0438123-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.354698 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.363993 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.364129 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9sbc\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-kube-api-access-l9sbc\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 
11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.365791 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.374258 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.374483 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.374606 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bjhdc" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.374724 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.374794 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.374860 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.375048 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.381761 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.383893 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.437852 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.547816 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.547865 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.547895 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435dc982-a475-4753-81d0-58bff20a6f17-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.547912 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.547950 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhxg\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-kube-api-access-nrhxg\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.547972 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/435dc982-a475-4753-81d0-58bff20a6f17-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.547996 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.548013 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.548067 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.548109 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.548129 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650114 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650197 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650291 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650348 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650377 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650423 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650453 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650506 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435dc982-a475-4753-81d0-58bff20a6f17-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650533 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650587 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nrhxg\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-kube-api-access-nrhxg\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.650611 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/435dc982-a475-4753-81d0-58bff20a6f17-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.651211 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.652237 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.652635 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.652716 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.652884 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.654372 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.657253 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435dc982-a475-4753-81d0-58bff20a6f17-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.659003 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.659685 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/435dc982-a475-4753-81d0-58bff20a6f17-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.664006 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.683497 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrhxg\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-kube-api-access-nrhxg\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.697483 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:56 crc kubenswrapper[4925]: I0202 11:14:56.771909 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.014301 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" event={"ID":"f4232060-525a-4d70-8c74-1bc8d38330d3","Type":"ContainerStarted","Data":"76465ba62fe01251b3c5c3c692de0ccbe0801a25cc30e76798145faf9b1aa4a6"} Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.060483 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.315822 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.705866 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.706973 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.711427 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.711596 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.717777 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7rbsn" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.718083 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.727665 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.734992 4925 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"combined-ca-bundle" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.785222 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-config-data-default\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.785264 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.785287 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.785581 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.785691 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 
11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.785719 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhct\" (UniqueName: \"kubernetes.io/projected/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-kube-api-access-wmhct\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.785801 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.785886 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-kolla-config\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.887358 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.887590 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.887666 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhct\" (UniqueName: \"kubernetes.io/projected/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-kube-api-access-wmhct\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.887752 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.887929 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-kolla-config\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.888021 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-config-data-default\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.888123 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.888227 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.888539 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.888593 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-kolla-config\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.888552 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.888838 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-config-data-default\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.889382 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " 
pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.891799 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.904575 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhct\" (UniqueName: \"kubernetes.io/projected/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-kube-api-access-wmhct\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.905831 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:57 crc kubenswrapper[4925]: I0202 11:14:57.906597 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99509bd-1ed8-4516-8ed2-8d99b8e33c67-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d99509bd-1ed8-4516-8ed2-8d99b8e33c67\") " pod="openstack/openstack-galera-0" Feb 02 11:14:58 crc kubenswrapper[4925]: I0202 11:14:58.021843 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d6b9691-80b3-418b-a4c7-fc80c0438123","Type":"ContainerStarted","Data":"662a4022280c3ce465866739ffa66ee8b75ebbca99846bbdc7f241d080127d22"} Feb 02 11:14:58 crc kubenswrapper[4925]: I0202 11:14:58.022755 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"435dc982-a475-4753-81d0-58bff20a6f17","Type":"ContainerStarted","Data":"2c64223f10e43efac54cf9ed0bab0882368c04e9b77b5be71aaf6da61b9cb061"} Feb 02 11:14:58 crc kubenswrapper[4925]: I0202 11:14:58.042438 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 11:14:58 crc kubenswrapper[4925]: I0202 11:14:58.499838 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 11:14:58 crc kubenswrapper[4925]: W0202 11:14:58.502872 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd99509bd_1ed8_4516_8ed2_8d99b8e33c67.slice/crio-b8bc86b9eb1d7dd4c2359f5c950f9eaa19653dc9e62c51be6307e405a654ec57 WatchSource:0}: Error finding container b8bc86b9eb1d7dd4c2359f5c950f9eaa19653dc9e62c51be6307e405a654ec57: Status 404 returned error can't find the container with id b8bc86b9eb1d7dd4c2359f5c950f9eaa19653dc9e62c51be6307e405a654ec57 Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.029261 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d99509bd-1ed8-4516-8ed2-8d99b8e33c67","Type":"ContainerStarted","Data":"b8bc86b9eb1d7dd4c2359f5c950f9eaa19653dc9e62c51be6307e405a654ec57"} Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.100431 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.102199 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.106986 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.107487 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.107715 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.107867 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.108038 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-s5q9x" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.209507 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d4545e-f93a-4767-bba7-d01bcaf43c4f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.209559 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64d4545e-f93a-4767-bba7-d01bcaf43c4f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.209622 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/64d4545e-f93a-4767-bba7-d01bcaf43c4f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.209650 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/64d4545e-f93a-4767-bba7-d01bcaf43c4f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.209671 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64d4545e-f93a-4767-bba7-d01bcaf43c4f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.209729 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wsp\" (UniqueName: \"kubernetes.io/projected/64d4545e-f93a-4767-bba7-d01bcaf43c4f-kube-api-access-r8wsp\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.209776 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/64d4545e-f93a-4767-bba7-d01bcaf43c4f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.209801 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.311647 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8wsp\" (UniqueName: \"kubernetes.io/projected/64d4545e-f93a-4767-bba7-d01bcaf43c4f-kube-api-access-r8wsp\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.311777 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/64d4545e-f93a-4767-bba7-d01bcaf43c4f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.311826 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.311869 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d4545e-f93a-4767-bba7-d01bcaf43c4f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.311894 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/64d4545e-f93a-4767-bba7-d01bcaf43c4f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.311968 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d4545e-f93a-4767-bba7-d01bcaf43c4f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.312002 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/64d4545e-f93a-4767-bba7-d01bcaf43c4f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.312033 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64d4545e-f93a-4767-bba7-d01bcaf43c4f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.312584 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/64d4545e-f93a-4767-bba7-d01bcaf43c4f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.312696 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.312914 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64d4545e-f93a-4767-bba7-d01bcaf43c4f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.312951 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/64d4545e-f93a-4767-bba7-d01bcaf43c4f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.315041 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64d4545e-f93a-4767-bba7-d01bcaf43c4f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.319365 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d4545e-f93a-4767-bba7-d01bcaf43c4f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.320186 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d4545e-f93a-4767-bba7-d01bcaf43c4f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") 
" pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.332205 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8wsp\" (UniqueName: \"kubernetes.io/projected/64d4545e-f93a-4767-bba7-d01bcaf43c4f-kube-api-access-r8wsp\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.337926 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"64d4545e-f93a-4767-bba7-d01bcaf43c4f\") " pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.471451 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.497227 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.503395 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.511065 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.511385 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-wz8cs" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.511880 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.562116 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.623581 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6bf67c7c-0e93-499e-9530-735520afac74-kolla-config\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.623720 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bf67c7c-0e93-499e-9530-735520afac74-config-data\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.623809 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf67c7c-0e93-499e-9530-735520afac74-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.623846 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qhpgj\" (UniqueName: \"kubernetes.io/projected/6bf67c7c-0e93-499e-9530-735520afac74-kube-api-access-qhpgj\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.624070 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf67c7c-0e93-499e-9530-735520afac74-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.725282 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6bf67c7c-0e93-499e-9530-735520afac74-kolla-config\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.725327 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bf67c7c-0e93-499e-9530-735520afac74-config-data\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.725400 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf67c7c-0e93-499e-9530-735520afac74-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.725416 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhpgj\" (UniqueName: \"kubernetes.io/projected/6bf67c7c-0e93-499e-9530-735520afac74-kube-api-access-qhpgj\") pod \"memcached-0\" (UID: 
\"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.725452 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf67c7c-0e93-499e-9530-735520afac74-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.727368 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6bf67c7c-0e93-499e-9530-735520afac74-kolla-config\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.727941 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bf67c7c-0e93-499e-9530-735520afac74-config-data\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.735447 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf67c7c-0e93-499e-9530-735520afac74-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.736190 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf67c7c-0e93-499e-9530-735520afac74-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.763785 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhpgj\" (UniqueName: 
\"kubernetes.io/projected/6bf67c7c-0e93-499e-9530-735520afac74-kube-api-access-qhpgj\") pod \"memcached-0\" (UID: \"6bf67c7c-0e93-499e-9530-735520afac74\") " pod="openstack/memcached-0" Feb 02 11:14:59 crc kubenswrapper[4925]: I0202 11:14:59.872467 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.060189 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 11:15:00 crc kubenswrapper[4925]: W0202 11:15:00.068397 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64d4545e_f93a_4767_bba7_d01bcaf43c4f.slice/crio-0a63a793ef980191098deca5fc489af6a277c2113e98ccfee6b682c3a053bb1c WatchSource:0}: Error finding container 0a63a793ef980191098deca5fc489af6a277c2113e98ccfee6b682c3a053bb1c: Status 404 returned error can't find the container with id 0a63a793ef980191098deca5fc489af6a277c2113e98ccfee6b682c3a053bb1c Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.150939 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp"] Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.156042 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.158377 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.158495 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.171368 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp"] Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.236965 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnn67\" (UniqueName: \"kubernetes.io/projected/f8a4ad92-ad81-471a-9495-15b9398f8eb4-kube-api-access-qnn67\") pod \"collect-profiles-29500515-g7vlp\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.237064 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a4ad92-ad81-471a-9495-15b9398f8eb4-secret-volume\") pod \"collect-profiles-29500515-g7vlp\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.237396 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a4ad92-ad81-471a-9495-15b9398f8eb4-config-volume\") pod \"collect-profiles-29500515-g7vlp\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.338770 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a4ad92-ad81-471a-9495-15b9398f8eb4-config-volume\") pod \"collect-profiles-29500515-g7vlp\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.338887 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnn67\" (UniqueName: \"kubernetes.io/projected/f8a4ad92-ad81-471a-9495-15b9398f8eb4-kube-api-access-qnn67\") pod \"collect-profiles-29500515-g7vlp\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.338925 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a4ad92-ad81-471a-9495-15b9398f8eb4-secret-volume\") pod \"collect-profiles-29500515-g7vlp\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.339625 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a4ad92-ad81-471a-9495-15b9398f8eb4-config-volume\") pod \"collect-profiles-29500515-g7vlp\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.345117 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f8a4ad92-ad81-471a-9495-15b9398f8eb4-secret-volume\") pod \"collect-profiles-29500515-g7vlp\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.354362 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnn67\" (UniqueName: \"kubernetes.io/projected/f8a4ad92-ad81-471a-9495-15b9398f8eb4-kube-api-access-qnn67\") pod \"collect-profiles-29500515-g7vlp\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.494292 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 11:15:00 crc kubenswrapper[4925]: W0202 11:15:00.511195 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bf67c7c_0e93_499e_9530_735520afac74.slice/crio-8dc44790dabd595d35c9541d8c4f7bf88341de87cf61772ece8cf7be67549dce WatchSource:0}: Error finding container 8dc44790dabd595d35c9541d8c4f7bf88341de87cf61772ece8cf7be67549dce: Status 404 returned error can't find the container with id 8dc44790dabd595d35c9541d8c4f7bf88341de87cf61772ece8cf7be67549dce Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.521150 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:00 crc kubenswrapper[4925]: I0202 11:15:00.943505 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp"] Feb 02 11:15:00 crc kubenswrapper[4925]: W0202 11:15:00.971220 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8a4ad92_ad81_471a_9495_15b9398f8eb4.slice/crio-acab0ed328537a4c45ab344626ad04a9187874c1967d3d05ba086c9983ec57e8 WatchSource:0}: Error finding container acab0ed328537a4c45ab344626ad04a9187874c1967d3d05ba086c9983ec57e8: Status 404 returned error can't find the container with id acab0ed328537a4c45ab344626ad04a9187874c1967d3d05ba086c9983ec57e8 Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.049357 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6bf67c7c-0e93-499e-9530-735520afac74","Type":"ContainerStarted","Data":"8dc44790dabd595d35c9541d8c4f7bf88341de87cf61772ece8cf7be67549dce"} Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.051462 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" event={"ID":"f8a4ad92-ad81-471a-9495-15b9398f8eb4","Type":"ContainerStarted","Data":"acab0ed328537a4c45ab344626ad04a9187874c1967d3d05ba086c9983ec57e8"} Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.052997 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"64d4545e-f93a-4767-bba7-d01bcaf43c4f","Type":"ContainerStarted","Data":"0a63a793ef980191098deca5fc489af6a277c2113e98ccfee6b682c3a053bb1c"} Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.152678 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 
11:15:01.153655 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.155884 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qtmvm" Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.159236 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.255195 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxr2c\" (UniqueName: \"kubernetes.io/projected/c60f25de-220a-4eb1-b1da-30faf1a27cf8-kube-api-access-qxr2c\") pod \"kube-state-metrics-0\" (UID: \"c60f25de-220a-4eb1-b1da-30faf1a27cf8\") " pod="openstack/kube-state-metrics-0" Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.356891 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxr2c\" (UniqueName: \"kubernetes.io/projected/c60f25de-220a-4eb1-b1da-30faf1a27cf8-kube-api-access-qxr2c\") pod \"kube-state-metrics-0\" (UID: \"c60f25de-220a-4eb1-b1da-30faf1a27cf8\") " pod="openstack/kube-state-metrics-0" Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.378197 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxr2c\" (UniqueName: \"kubernetes.io/projected/c60f25de-220a-4eb1-b1da-30faf1a27cf8-kube-api-access-qxr2c\") pod \"kube-state-metrics-0\" (UID: \"c60f25de-220a-4eb1-b1da-30faf1a27cf8\") " pod="openstack/kube-state-metrics-0" Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.482720 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 11:15:01 crc kubenswrapper[4925]: I0202 11:15:01.955715 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 11:15:02 crc kubenswrapper[4925]: I0202 11:15:02.063163 4925 generic.go:334] "Generic (PLEG): container finished" podID="f8a4ad92-ad81-471a-9495-15b9398f8eb4" containerID="fa9c5f0c1d1526483a03d5dc5a9db99b8d216756d18f9b67b429d0223aab3dd2" exitCode=0 Feb 02 11:15:02 crc kubenswrapper[4925]: I0202 11:15:02.063226 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" event={"ID":"f8a4ad92-ad81-471a-9495-15b9398f8eb4","Type":"ContainerDied","Data":"fa9c5f0c1d1526483a03d5dc5a9db99b8d216756d18f9b67b429d0223aab3dd2"} Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.232161 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gr5rf"] Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.234159 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.239897 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.240068 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.240461 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-s69hn" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.244108 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-26w5s"] Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.246135 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.254192 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gr5rf"] Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.266089 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-26w5s"] Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.327750 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/feb2b36a-609f-4805-8b50-fe0731522375-ovn-controller-tls-certs\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.327818 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-etc-ovs\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.327850 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-var-run\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.327882 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d118fb79-debc-4d5d-b390-38f913681237-scripts\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 
11:15:05.327906 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/feb2b36a-609f-4805-8b50-fe0731522375-scripts\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.327990 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-var-log\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.328039 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz6jr\" (UniqueName: \"kubernetes.io/projected/feb2b36a-609f-4805-8b50-fe0731522375-kube-api-access-kz6jr\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.328149 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/feb2b36a-609f-4805-8b50-fe0731522375-var-run-ovn\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.328178 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb2b36a-609f-4805-8b50-fe0731522375-combined-ca-bundle\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.328341 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2zf9\" (UniqueName: \"kubernetes.io/projected/d118fb79-debc-4d5d-b390-38f913681237-kube-api-access-x2zf9\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.328388 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/feb2b36a-609f-4805-8b50-fe0731522375-var-log-ovn\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.328438 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/feb2b36a-609f-4805-8b50-fe0731522375-var-run\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.328461 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-var-lib\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430265 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/feb2b36a-609f-4805-8b50-fe0731522375-ovn-controller-tls-certs\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430315 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-etc-ovs\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430348 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-var-run\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430385 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d118fb79-debc-4d5d-b390-38f913681237-scripts\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430409 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/feb2b36a-609f-4805-8b50-fe0731522375-scripts\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430435 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz6jr\" (UniqueName: \"kubernetes.io/projected/feb2b36a-609f-4805-8b50-fe0731522375-kube-api-access-kz6jr\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430454 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-var-log\") pod \"ovn-controller-ovs-26w5s\" (UID: 
\"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430498 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/feb2b36a-609f-4805-8b50-fe0731522375-var-run-ovn\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430522 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb2b36a-609f-4805-8b50-fe0731522375-combined-ca-bundle\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430567 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2zf9\" (UniqueName: \"kubernetes.io/projected/d118fb79-debc-4d5d-b390-38f913681237-kube-api-access-x2zf9\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430596 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/feb2b36a-609f-4805-8b50-fe0731522375-var-log-ovn\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430624 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/feb2b36a-609f-4805-8b50-fe0731522375-var-run\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 
11:15:05.430648 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-var-lib\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430824 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-etc-ovs\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.430974 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-var-lib\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.431051 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-var-log\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.431193 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/feb2b36a-609f-4805-8b50-fe0731522375-var-run-ovn\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.433032 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d118fb79-debc-4d5d-b390-38f913681237-scripts\") pod 
\"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.433112 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d118fb79-debc-4d5d-b390-38f913681237-var-run\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.433120 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/feb2b36a-609f-4805-8b50-fe0731522375-var-log-ovn\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.433126 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/feb2b36a-609f-4805-8b50-fe0731522375-var-run\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.433200 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/feb2b36a-609f-4805-8b50-fe0731522375-scripts\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.437714 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/feb2b36a-609f-4805-8b50-fe0731522375-ovn-controller-tls-certs\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.439997 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb2b36a-609f-4805-8b50-fe0731522375-combined-ca-bundle\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.449164 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2zf9\" (UniqueName: \"kubernetes.io/projected/d118fb79-debc-4d5d-b390-38f913681237-kube-api-access-x2zf9\") pod \"ovn-controller-ovs-26w5s\" (UID: \"d118fb79-debc-4d5d-b390-38f913681237\") " pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.452830 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz6jr\" (UniqueName: \"kubernetes.io/projected/feb2b36a-609f-4805-8b50-fe0731522375-kube-api-access-kz6jr\") pod \"ovn-controller-gr5rf\" (UID: \"feb2b36a-609f-4805-8b50-fe0731522375\") " pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.563163 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gr5rf" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.571423 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.854310 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.855622 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.858443 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.858500 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.858617 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.858690 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bkrgn" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.858774 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.885277 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.952409 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.952452 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0798cb6a-03c0-467e-b65f-05612b9213d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.952486 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0798cb6a-03c0-467e-b65f-05612b9213d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.952517 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0798cb6a-03c0-467e-b65f-05612b9213d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.952589 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0798cb6a-03c0-467e-b65f-05612b9213d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.952632 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fht97\" (UniqueName: \"kubernetes.io/projected/0798cb6a-03c0-467e-b65f-05612b9213d3-kube-api-access-fht97\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.952657 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0798cb6a-03c0-467e-b65f-05612b9213d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:05 crc kubenswrapper[4925]: I0202 11:15:05.952730 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/0798cb6a-03c0-467e-b65f-05612b9213d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.053808 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.053850 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0798cb6a-03c0-467e-b65f-05612b9213d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.053873 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0798cb6a-03c0-467e-b65f-05612b9213d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.053897 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0798cb6a-03c0-467e-b65f-05612b9213d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.053939 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0798cb6a-03c0-467e-b65f-05612b9213d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc 
kubenswrapper[4925]: I0202 11:15:06.053967 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fht97\" (UniqueName: \"kubernetes.io/projected/0798cb6a-03c0-467e-b65f-05612b9213d3-kube-api-access-fht97\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.053985 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0798cb6a-03c0-467e-b65f-05612b9213d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.054029 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0798cb6a-03c0-467e-b65f-05612b9213d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.054195 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.054503 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0798cb6a-03c0-467e-b65f-05612b9213d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.055115 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0798cb6a-03c0-467e-b65f-05612b9213d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.056009 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0798cb6a-03c0-467e-b65f-05612b9213d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.059350 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0798cb6a-03c0-467e-b65f-05612b9213d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.068814 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0798cb6a-03c0-467e-b65f-05612b9213d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.074516 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0798cb6a-03c0-467e-b65f-05612b9213d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.074822 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fht97\" (UniqueName: \"kubernetes.io/projected/0798cb6a-03c0-467e-b65f-05612b9213d3-kube-api-access-fht97\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " 
pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.091896 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0798cb6a-03c0-467e-b65f-05612b9213d3\") " pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.173431 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.956670 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.958249 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.961400 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.961449 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-787dm" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.961743 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.961946 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 11:15:06 crc kubenswrapper[4925]: I0202 11:15:06.986325 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.069012 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.069392 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.069433 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.069466 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.069565 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.069676 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " 
pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.069754 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-config\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.069869 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdcgl\" (UniqueName: \"kubernetes.io/projected/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-kube-api-access-pdcgl\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171147 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171195 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171236 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171273 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171307 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171330 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171358 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-config\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171406 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdcgl\" (UniqueName: \"kubernetes.io/projected/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-kube-api-access-pdcgl\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171525 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.171844 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.172682 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.172992 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-config\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.175699 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.176119 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.183893 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.189674 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdcgl\" (UniqueName: \"kubernetes.io/projected/ab8a8eaa-8f11-490e-9251-e4d34b8c481b-kube-api-access-pdcgl\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.193350 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ab8a8eaa-8f11-490e-9251-e4d34b8c481b\") " pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:07 crc kubenswrapper[4925]: I0202 11:15:07.275395 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 11:15:08 crc kubenswrapper[4925]: I0202 11:15:08.777059 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:08 crc kubenswrapper[4925]: I0202 11:15:08.916118 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a4ad92-ad81-471a-9495-15b9398f8eb4-secret-volume\") pod \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " Feb 02 11:15:08 crc kubenswrapper[4925]: I0202 11:15:08.916314 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnn67\" (UniqueName: \"kubernetes.io/projected/f8a4ad92-ad81-471a-9495-15b9398f8eb4-kube-api-access-qnn67\") pod \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " Feb 02 11:15:08 crc kubenswrapper[4925]: I0202 11:15:08.916386 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a4ad92-ad81-471a-9495-15b9398f8eb4-config-volume\") pod \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\" (UID: \"f8a4ad92-ad81-471a-9495-15b9398f8eb4\") " Feb 02 11:15:08 crc kubenswrapper[4925]: I0202 11:15:08.917299 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a4ad92-ad81-471a-9495-15b9398f8eb4-config-volume" (OuterVolumeSpecName: "config-volume") pod "f8a4ad92-ad81-471a-9495-15b9398f8eb4" (UID: "f8a4ad92-ad81-471a-9495-15b9398f8eb4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:15:08 crc kubenswrapper[4925]: I0202 11:15:08.921422 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a4ad92-ad81-471a-9495-15b9398f8eb4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f8a4ad92-ad81-471a-9495-15b9398f8eb4" (UID: "f8a4ad92-ad81-471a-9495-15b9398f8eb4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:15:08 crc kubenswrapper[4925]: I0202 11:15:08.922558 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a4ad92-ad81-471a-9495-15b9398f8eb4-kube-api-access-qnn67" (OuterVolumeSpecName: "kube-api-access-qnn67") pod "f8a4ad92-ad81-471a-9495-15b9398f8eb4" (UID: "f8a4ad92-ad81-471a-9495-15b9398f8eb4"). InnerVolumeSpecName "kube-api-access-qnn67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:15:09 crc kubenswrapper[4925]: I0202 11:15:09.018258 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnn67\" (UniqueName: \"kubernetes.io/projected/f8a4ad92-ad81-471a-9495-15b9398f8eb4-kube-api-access-qnn67\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:09 crc kubenswrapper[4925]: I0202 11:15:09.018285 4925 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a4ad92-ad81-471a-9495-15b9398f8eb4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:09 crc kubenswrapper[4925]: I0202 11:15:09.018296 4925 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8a4ad92-ad81-471a-9495-15b9398f8eb4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:09 crc kubenswrapper[4925]: I0202 11:15:09.122152 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" event={"ID":"f8a4ad92-ad81-471a-9495-15b9398f8eb4","Type":"ContainerDied","Data":"acab0ed328537a4c45ab344626ad04a9187874c1967d3d05ba086c9983ec57e8"} Feb 02 11:15:09 crc kubenswrapper[4925]: I0202 11:15:09.122192 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acab0ed328537a4c45ab344626ad04a9187874c1967d3d05ba086c9983ec57e8" Feb 02 11:15:09 crc kubenswrapper[4925]: I0202 11:15:09.122224 4925 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp" Feb 02 11:15:20 crc kubenswrapper[4925]: W0202 11:15:20.392294 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc60f25de_220a_4eb1_b1da_30faf1a27cf8.slice/crio-2042b6970a9af4f5fe8622e1e0bb61b7c17e1572187e7f1d61b9814bbcd8058e WatchSource:0}: Error finding container 2042b6970a9af4f5fe8622e1e0bb61b7c17e1572187e7f1d61b9814bbcd8058e: Status 404 returned error can't find the container with id 2042b6970a9af4f5fe8622e1e0bb61b7c17e1572187e7f1d61b9814bbcd8058e Feb 02 11:15:21 crc kubenswrapper[4925]: I0202 11:15:21.225771 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c60f25de-220a-4eb1-b1da-30faf1a27cf8","Type":"ContainerStarted","Data":"2042b6970a9af4f5fe8622e1e0bb61b7c17e1572187e7f1d61b9814bbcd8058e"} Feb 02 11:15:29 crc kubenswrapper[4925]: I0202 11:15:29.067928 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-26w5s"] Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.115938 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-r55bh"] Feb 02 11:15:38 crc kubenswrapper[4925]: E0202 11:15:38.116878 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8a4ad92-ad81-471a-9495-15b9398f8eb4" containerName="collect-profiles" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.116897 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a4ad92-ad81-471a-9495-15b9398f8eb4" containerName="collect-profiles" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.117059 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a4ad92-ad81-471a-9495-15b9398f8eb4" containerName="collect-profiles" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.117739 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.120729 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.128250 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-r55bh"] Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.219201 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/952fc6ba-02b5-4a94-90b4-2a206213f818-ovn-rundir\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.219284 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbgjr\" (UniqueName: \"kubernetes.io/projected/952fc6ba-02b5-4a94-90b4-2a206213f818-kube-api-access-dbgjr\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.219350 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952fc6ba-02b5-4a94-90b4-2a206213f818-combined-ca-bundle\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.219409 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/952fc6ba-02b5-4a94-90b4-2a206213f818-ovs-rundir\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " 
pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.219466 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/952fc6ba-02b5-4a94-90b4-2a206213f818-config\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.219504 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/952fc6ba-02b5-4a94-90b4-2a206213f818-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.320886 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/952fc6ba-02b5-4a94-90b4-2a206213f818-ovn-rundir\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.320958 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbgjr\" (UniqueName: \"kubernetes.io/projected/952fc6ba-02b5-4a94-90b4-2a206213f818-kube-api-access-dbgjr\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.321016 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952fc6ba-02b5-4a94-90b4-2a206213f818-combined-ca-bundle\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " 
pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.321059 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/952fc6ba-02b5-4a94-90b4-2a206213f818-ovs-rundir\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.321124 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/952fc6ba-02b5-4a94-90b4-2a206213f818-config\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.321168 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/952fc6ba-02b5-4a94-90b4-2a206213f818-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.321439 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/952fc6ba-02b5-4a94-90b4-2a206213f818-ovn-rundir\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.321533 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/952fc6ba-02b5-4a94-90b4-2a206213f818-ovs-rundir\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 
11:15:38.322295 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/952fc6ba-02b5-4a94-90b4-2a206213f818-config\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.326991 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/952fc6ba-02b5-4a94-90b4-2a206213f818-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.328755 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/952fc6ba-02b5-4a94-90b4-2a206213f818-combined-ca-bundle\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.343430 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbgjr\" (UniqueName: \"kubernetes.io/projected/952fc6ba-02b5-4a94-90b4-2a206213f818-kube-api-access-dbgjr\") pod \"ovn-controller-metrics-r55bh\" (UID: \"952fc6ba-02b5-4a94-90b4-2a206213f818\") " pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:38 crc kubenswrapper[4925]: I0202 11:15:38.444256 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-r55bh" Feb 02 11:15:48 crc kubenswrapper[4925]: E0202 11:15:48.013344 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 02 11:15:48 crc kubenswrapper[4925]: E0202 11:15:48.014092 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9sbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0d6b9691-80b3-418b-a4c7-fc80c0438123): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:15:48 crc 
kubenswrapper[4925]: E0202 11:15:48.015277 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="0d6b9691-80b3-418b-a4c7-fc80c0438123" Feb 02 11:15:48 crc kubenswrapper[4925]: E0202 11:15:48.445612 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0d6b9691-80b3-418b-a4c7-fc80c0438123" Feb 02 11:15:57 crc kubenswrapper[4925]: I0202 11:15:57.506338 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-26w5s" event={"ID":"d118fb79-debc-4d5d-b390-38f913681237","Type":"ContainerStarted","Data":"54a48bf2b5ab95970ed372d5a31603d6d125b6745c6da62c87a287cce4dd7000"} Feb 02 11:16:05 crc kubenswrapper[4925]: E0202 11:16:05.841655 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 02 11:16:05 crc kubenswrapper[4925]: E0202 11:16:05.843625 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmhct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(d99509bd-1ed8-4516-8ed2-8d99b8e33c67): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:16:05 crc kubenswrapper[4925]: E0202 11:16:05.844833 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 02 11:16:05 crc kubenswrapper[4925]: E0202 11:16:05.845044 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8wsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/de
v/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(64d4545e-f93a-4767-bba7-d01bcaf43c4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:16:05 crc kubenswrapper[4925]: E0202 11:16:05.845258 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="d99509bd-1ed8-4516-8ed2-8d99b8e33c67" Feb 02 11:16:05 crc kubenswrapper[4925]: E0202 11:16:05.846343 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="64d4545e-f93a-4767-bba7-d01bcaf43c4f" Feb 02 11:16:06 crc kubenswrapper[4925]: E0202 11:16:06.571479 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="64d4545e-f93a-4767-bba7-d01bcaf43c4f" Feb 02 11:16:06 crc kubenswrapper[4925]: E0202 11:16:06.571776 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="d99509bd-1ed8-4516-8ed2-8d99b8e33c67" Feb 02 11:16:12 crc kubenswrapper[4925]: E0202 11:16:12.252131 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 11:16:12 crc kubenswrapper[4925]: E0202 11:16:12.254200 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ck8pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-vpz4b_openstack(f41144d6-f6b8-4844-8cf4-31d25f69535e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:16:12 crc kubenswrapper[4925]: E0202 11:16:12.255379 4925 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" podUID="f41144d6-f6b8-4844-8cf4-31d25f69535e" Feb 02 11:16:12 crc kubenswrapper[4925]: E0202 11:16:12.619313 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" podUID="f41144d6-f6b8-4844-8cf4-31d25f69535e" Feb 02 11:16:13 crc kubenswrapper[4925]: I0202 11:16:13.398832 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:16:13 crc kubenswrapper[4925]: I0202 11:16:13.399195 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:16:15 crc kubenswrapper[4925]: E0202 11:16:15.332177 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 11:16:15 crc kubenswrapper[4925]: E0202 11:16:15.332451 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5b2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-9zjtb_openstack(13fa0562-8605-41e2-a10b-78cb1dae73a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:16:15 crc kubenswrapper[4925]: E0202 
11:16:15.333748 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" podUID="13fa0562-8605-41e2-a10b-78cb1dae73a3" Feb 02 11:16:16 crc kubenswrapper[4925]: E0202 11:16:16.361153 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 11:16:16 crc kubenswrapper[4925]: E0202 11:16:16.361468 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66gln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ksrb8_openstack(72b52ea4-726d-472a-9614-cfdfeff2508a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:16:16 crc kubenswrapper[4925]: E0202 11:16:16.363209 4925 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" podUID="72b52ea4-726d-472a-9614-cfdfeff2508a" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.654374 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" event={"ID":"72b52ea4-726d-472a-9614-cfdfeff2508a","Type":"ContainerDied","Data":"c1938e64d95dbe65e0abe21393fcaf385d40790cf000ec1bd7ad20e1ebe489a0"} Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.654419 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1938e64d95dbe65e0abe21393fcaf385d40790cf000ec1bd7ad20e1ebe489a0" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.656025 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" event={"ID":"13fa0562-8605-41e2-a10b-78cb1dae73a3","Type":"ContainerDied","Data":"0e4ed60bb561001fa0e0015fb4eb1f58be563b7aee3d1763c28d4c5685bb1e8f"} Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.656064 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e4ed60bb561001fa0e0015fb4eb1f58be563b7aee3d1763c28d4c5685bb1e8f" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.747746 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.772167 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.828767 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-config\") pod \"72b52ea4-726d-472a-9614-cfdfeff2508a\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.828959 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5b2h\" (UniqueName: \"kubernetes.io/projected/13fa0562-8605-41e2-a10b-78cb1dae73a3-kube-api-access-b5b2h\") pod \"13fa0562-8605-41e2-a10b-78cb1dae73a3\" (UID: \"13fa0562-8605-41e2-a10b-78cb1dae73a3\") " Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.828989 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-dns-svc\") pod \"72b52ea4-726d-472a-9614-cfdfeff2508a\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.829045 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fa0562-8605-41e2-a10b-78cb1dae73a3-config\") pod \"13fa0562-8605-41e2-a10b-78cb1dae73a3\" (UID: \"13fa0562-8605-41e2-a10b-78cb1dae73a3\") " Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.829250 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66gln\" (UniqueName: \"kubernetes.io/projected/72b52ea4-726d-472a-9614-cfdfeff2508a-kube-api-access-66gln\") pod \"72b52ea4-726d-472a-9614-cfdfeff2508a\" (UID: \"72b52ea4-726d-472a-9614-cfdfeff2508a\") " Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.830256 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-config" (OuterVolumeSpecName: "config") pod "72b52ea4-726d-472a-9614-cfdfeff2508a" (UID: "72b52ea4-726d-472a-9614-cfdfeff2508a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.830395 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.831023 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13fa0562-8605-41e2-a10b-78cb1dae73a3-config" (OuterVolumeSpecName: "config") pod "13fa0562-8605-41e2-a10b-78cb1dae73a3" (UID: "13fa0562-8605-41e2-a10b-78cb1dae73a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.831495 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72b52ea4-726d-472a-9614-cfdfeff2508a" (UID: "72b52ea4-726d-472a-9614-cfdfeff2508a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.836421 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72b52ea4-726d-472a-9614-cfdfeff2508a-kube-api-access-66gln" (OuterVolumeSpecName: "kube-api-access-66gln") pod "72b52ea4-726d-472a-9614-cfdfeff2508a" (UID: "72b52ea4-726d-472a-9614-cfdfeff2508a"). InnerVolumeSpecName "kube-api-access-66gln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.837153 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13fa0562-8605-41e2-a10b-78cb1dae73a3-kube-api-access-b5b2h" (OuterVolumeSpecName: "kube-api-access-b5b2h") pod "13fa0562-8605-41e2-a10b-78cb1dae73a3" (UID: "13fa0562-8605-41e2-a10b-78cb1dae73a3"). InnerVolumeSpecName "kube-api-access-b5b2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:17 crc kubenswrapper[4925]: E0202 11:16:17.889120 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 11:16:17 crc kubenswrapper[4925]: E0202 11:16:17.889327 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njnwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-d6md8_openstack(f4232060-525a-4d70-8c74-1bc8d38330d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:16:17 crc kubenswrapper[4925]: E0202 11:16:17.890538 4925 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" podUID="f4232060-525a-4d70-8c74-1bc8d38330d3" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.932257 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5b2h\" (UniqueName: \"kubernetes.io/projected/13fa0562-8605-41e2-a10b-78cb1dae73a3-kube-api-access-b5b2h\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.932306 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72b52ea4-726d-472a-9614-cfdfeff2508a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.932319 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fa0562-8605-41e2-a10b-78cb1dae73a3-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:17 crc kubenswrapper[4925]: I0202 11:16:17.932328 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66gln\" (UniqueName: \"kubernetes.io/projected/72b52ea4-726d-472a-9614-cfdfeff2508a-kube-api-access-66gln\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.197265 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 11:16:18 crc kubenswrapper[4925]: W0202 11:16:18.204182 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0798cb6a_03c0_467e_b65f_05612b9213d3.slice/crio-999769b963e6a6cd0be553d9d05ca1e1b2430b294a3297eff35635d8cc507b30 WatchSource:0}: Error finding container 999769b963e6a6cd0be553d9d05ca1e1b2430b294a3297eff35635d8cc507b30: Status 404 returned error can't find the container with id 
999769b963e6a6cd0be553d9d05ca1e1b2430b294a3297eff35635d8cc507b30 Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.215390 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gr5rf"] Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.229613 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-r55bh"] Feb 02 11:16:18 crc kubenswrapper[4925]: W0202 11:16:18.235483 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod952fc6ba_02b5_4a94_90b4_2a206213f818.slice/crio-84aaf48d4294eb309c082ab16afa9ab2e97621f49ff21735da0887aad27ad79d WatchSource:0}: Error finding container 84aaf48d4294eb309c082ab16afa9ab2e97621f49ff21735da0887aad27ad79d: Status 404 returned error can't find the container with id 84aaf48d4294eb309c082ab16afa9ab2e97621f49ff21735da0887aad27ad79d Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.671576 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9zjtb" Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.673500 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ksrb8" Feb 02 11:16:18 crc kubenswrapper[4925]: E0202 11:16:18.677053 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" podUID="f4232060-525a-4d70-8c74-1bc8d38330d3" Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.686731 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0798cb6a-03c0-467e-b65f-05612b9213d3","Type":"ContainerStarted","Data":"999769b963e6a6cd0be553d9d05ca1e1b2430b294a3297eff35635d8cc507b30"} Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.686780 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gr5rf" event={"ID":"feb2b36a-609f-4805-8b50-fe0731522375","Type":"ContainerStarted","Data":"4cbb10dc14e44dabd08f5ee0149303d090f15486b615fe99af562c85987806dc"} Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.687324 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-r55bh" event={"ID":"952fc6ba-02b5-4a94-90b4-2a206213f818","Type":"ContainerStarted","Data":"84aaf48d4294eb309c082ab16afa9ab2e97621f49ff21735da0887aad27ad79d"} Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.770202 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9zjtb"] Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.784764 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9zjtb"] Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.797946 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ksrb8"] Feb 02 11:16:18 crc kubenswrapper[4925]: I0202 11:16:18.803985 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-ksrb8"] Feb 02 11:16:19 crc kubenswrapper[4925]: I0202 11:16:19.118437 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 11:16:19 crc kubenswrapper[4925]: W0202 11:16:19.124726 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab8a8eaa_8f11_490e_9251_e4d34b8c481b.slice/crio-fc9c12c1d2261b94bcc9aae986dc13a387a969956a8c0db8dcd3e57748353e5d WatchSource:0}: Error finding container fc9c12c1d2261b94bcc9aae986dc13a387a969956a8c0db8dcd3e57748353e5d: Status 404 returned error can't find the container with id fc9c12c1d2261b94bcc9aae986dc13a387a969956a8c0db8dcd3e57748353e5d Feb 02 11:16:19 crc kubenswrapper[4925]: I0202 11:16:19.678639 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ab8a8eaa-8f11-490e-9251-e4d34b8c481b","Type":"ContainerStarted","Data":"fc9c12c1d2261b94bcc9aae986dc13a387a969956a8c0db8dcd3e57748353e5d"} Feb 02 11:16:20 crc kubenswrapper[4925]: I0202 11:16:20.675905 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13fa0562-8605-41e2-a10b-78cb1dae73a3" path="/var/lib/kubelet/pods/13fa0562-8605-41e2-a10b-78cb1dae73a3/volumes" Feb 02 11:16:20 crc kubenswrapper[4925]: I0202 11:16:20.676623 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72b52ea4-726d-472a-9614-cfdfeff2508a" path="/var/lib/kubelet/pods/72b52ea4-726d-472a-9614-cfdfeff2508a/volumes" Feb 02 11:16:21 crc kubenswrapper[4925]: E0202 11:16:21.234562 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 02 11:16:21 crc kubenswrapper[4925]: E0202 11:16:21.234733 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrhxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPro
be:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(435dc982-a475-4753-81d0-58bff20a6f17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:16:21 crc kubenswrapper[4925]: E0202 11:16:21.236339 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="435dc982-a475-4753-81d0-58bff20a6f17" Feb 02 11:16:21 crc kubenswrapper[4925]: E0202 11:16:21.237183 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 02 11:16:21 crc kubenswrapper[4925]: E0202 11:16:21.237422 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:ncdh599h5c8h78h599h5bfh665h67dh96h696h68fh5dh5bch559h5c6h96h668hf8h5fch648h676h649h68ch596h585h556hbdh6fh57fhd9h545h9bq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhpgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(6bf67c7c-0e93-499e-9530-735520afac74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:16:21 crc kubenswrapper[4925]: E0202 11:16:21.238873 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="6bf67c7c-0e93-499e-9530-735520afac74" Feb 02 11:16:21 crc kubenswrapper[4925]: E0202 11:16:21.703402 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="435dc982-a475-4753-81d0-58bff20a6f17" Feb 02 11:16:21 crc kubenswrapper[4925]: E0202 11:16:21.703652 4925 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="6bf67c7c-0e93-499e-9530-735520afac74" Feb 02 11:16:26 crc kubenswrapper[4925]: E0202 11:16:26.368556 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 02 11:16:26 crc kubenswrapper[4925]: E0202 11:16:26.369599 4925 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 02 11:16:26 crc kubenswrapper[4925]: E0202 11:16:26.369827 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxr2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(c60f25de-220a-4eb1-b1da-30faf1a27cf8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Feb 02 11:16:26 crc kubenswrapper[4925]: E0202 11:16:26.371709 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="c60f25de-220a-4eb1-b1da-30faf1a27cf8" Feb 02 11:16:27 crc kubenswrapper[4925]: E0202 11:16:27.021983 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="c60f25de-220a-4eb1-b1da-30faf1a27cf8" Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.787089 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ab8a8eaa-8f11-490e-9251-e4d34b8c481b","Type":"ContainerStarted","Data":"2defec6f6161d83f10448f9b7cbd83ea5d69bb61bf569b8d667292aa9e00dfd4"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.787706 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ab8a8eaa-8f11-490e-9251-e4d34b8c481b","Type":"ContainerStarted","Data":"fe12bff8f810f9079b930f9315223d40bef02808b780292210431bc5835c3660"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.789512 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-r55bh" event={"ID":"952fc6ba-02b5-4a94-90b4-2a206213f818","Type":"ContainerStarted","Data":"56b7ca8a004f97bb6de535beb9f8b64e954a52aaa8240e14959fff079b8a0dfb"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.793550 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0798cb6a-03c0-467e-b65f-05612b9213d3","Type":"ContainerStarted","Data":"be5091e42eac28d0af0cd8bae16d948c86d29d787ab81e7160670b4b1c4dc253"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.793631 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0798cb6a-03c0-467e-b65f-05612b9213d3","Type":"ContainerStarted","Data":"fbc247f07356e5edf42b2997fb998ca74f4a4a64c2b13608abb463c20a6e2e2d"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.795133 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"64d4545e-f93a-4767-bba7-d01bcaf43c4f","Type":"ContainerStarted","Data":"7673430229c4c2fe8af301b58ee4833226cd3cd630c86b9d7e6090af3498ec8d"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.796958 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d6b9691-80b3-418b-a4c7-fc80c0438123","Type":"ContainerStarted","Data":"4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.798810 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gr5rf" event={"ID":"feb2b36a-609f-4805-8b50-fe0731522375","Type":"ContainerStarted","Data":"ae6e7e3f8339d695aa266de841a03ff4ad8e8e3ca1443e40165cf8a8ea28b27c"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.798893 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gr5rf" Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.801693 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d99509bd-1ed8-4516-8ed2-8d99b8e33c67","Type":"ContainerStarted","Data":"2ec71c3b1a9ed8e8329d8bbf8d9bfd9449404b53e40205c1254919b0afc058d9"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.803996 4925 generic.go:334] "Generic (PLEG): container finished" podID="f41144d6-f6b8-4844-8cf4-31d25f69535e" containerID="50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5" exitCode=0 Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.804108 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" event={"ID":"f41144d6-f6b8-4844-8cf4-31d25f69535e","Type":"ContainerDied","Data":"50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.807227 4925 generic.go:334] "Generic (PLEG): container finished" podID="d118fb79-debc-4d5d-b390-38f913681237" 
containerID="4f34c03d745ed47591a3486a9329ebcc59ccaa5b8fd62ca576f0875e72e4d5ef" exitCode=0 Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.807289 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-26w5s" event={"ID":"d118fb79-debc-4d5d-b390-38f913681237","Type":"ContainerDied","Data":"4f34c03d745ed47591a3486a9329ebcc59ccaa5b8fd62ca576f0875e72e4d5ef"} Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.825482 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=75.389026796 podStartE2EDuration="1m27.825446332s" podCreationTimestamp="2026-02-02 11:15:05 +0000 UTC" firstStartedPulling="2026-02-02 11:16:19.126426528 +0000 UTC m=+1156.130675490" lastFinishedPulling="2026-02-02 11:16:31.562846064 +0000 UTC m=+1168.567095026" observedRunningTime="2026-02-02 11:16:32.811371767 +0000 UTC m=+1169.815620739" watchObservedRunningTime="2026-02-02 11:16:32.825446332 +0000 UTC m=+1169.829695294" Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.864929 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-r55bh" podStartSLOduration=41.843101473 podStartE2EDuration="54.864900694s" podCreationTimestamp="2026-02-02 11:15:38 +0000 UTC" firstStartedPulling="2026-02-02 11:16:18.238463043 +0000 UTC m=+1155.242712015" lastFinishedPulling="2026-02-02 11:16:31.260262274 +0000 UTC m=+1168.264511236" observedRunningTime="2026-02-02 11:16:32.855980596 +0000 UTC m=+1169.860229568" watchObservedRunningTime="2026-02-02 11:16:32.864900694 +0000 UTC m=+1169.869149656" Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.889514 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=75.535164071 podStartE2EDuration="1m28.889498241s" podCreationTimestamp="2026-02-02 11:15:04 +0000 UTC" firstStartedPulling="2026-02-02 11:16:18.207026724 +0000 UTC 
m=+1155.211275696" lastFinishedPulling="2026-02-02 11:16:31.561360904 +0000 UTC m=+1168.565609866" observedRunningTime="2026-02-02 11:16:32.885962586 +0000 UTC m=+1169.890211548" watchObservedRunningTime="2026-02-02 11:16:32.889498241 +0000 UTC m=+1169.893747203" Feb 02 11:16:32 crc kubenswrapper[4925]: I0202 11:16:32.998802 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gr5rf" podStartSLOduration=74.657566505 podStartE2EDuration="1m27.998740714s" podCreationTimestamp="2026-02-02 11:15:05 +0000 UTC" firstStartedPulling="2026-02-02 11:16:18.220663158 +0000 UTC m=+1155.224912140" lastFinishedPulling="2026-02-02 11:16:31.561837387 +0000 UTC m=+1168.566086349" observedRunningTime="2026-02-02 11:16:32.962498928 +0000 UTC m=+1169.966747900" watchObservedRunningTime="2026-02-02 11:16:32.998740714 +0000 UTC m=+1170.002989676" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.127602 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpz4b"] Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.168871 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-gkjgn"] Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.171846 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.175956 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.176246 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.182551 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-gkjgn"] Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.330247 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-config\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.330586 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.330670 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8hv\" (UniqueName: \"kubernetes.io/projected/6b40d9ff-7cf3-4538-9667-10a25844d630-kube-api-access-5k8hv\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.330717 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.413662 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-d6md8"] Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.434853 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.434943 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-config\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.434974 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.435071 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k8hv\" (UniqueName: \"kubernetes.io/projected/6b40d9ff-7cf3-4538-9667-10a25844d630-kube-api-access-5k8hv\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.436278 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.436351 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.436425 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-config\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.449325 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f6t4k"] Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.450889 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.459787 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.468822 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f6t4k"] Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.546789 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k8hv\" (UniqueName: \"kubernetes.io/projected/6b40d9ff-7cf3-4538-9667-10a25844d630-kube-api-access-5k8hv\") pod \"dnsmasq-dns-7fd796d7df-gkjgn\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.646019 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.646941 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs57g\" (UniqueName: \"kubernetes.io/projected/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-kube-api-access-zs57g\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.647011 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.647039 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.647065 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-config\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.748722 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs57g\" (UniqueName: \"kubernetes.io/projected/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-kube-api-access-zs57g\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.748817 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.748859 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.748889 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-config\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.748944 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.750208 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.750994 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.753801 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-config\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.754823 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.792546 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.819005 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" podUID="f41144d6-f6b8-4844-8cf4-31d25f69535e" containerName="dnsmasq-dns" containerID="cri-o://6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d" gracePeriod=10 Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.818921 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" event={"ID":"f41144d6-f6b8-4844-8cf4-31d25f69535e","Type":"ContainerStarted","Data":"6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d"} Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.822680 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.824242 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-26w5s" event={"ID":"d118fb79-debc-4d5d-b390-38f913681237","Type":"ContainerStarted","Data":"4d39643a62d0a187f046d2f17a80df4896d37170435720cea685d52509a7962e"} Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.841673 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs57g\" (UniqueName: \"kubernetes.io/projected/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-kube-api-access-zs57g\") pod \"dnsmasq-dns-86db49b7ff-f6t4k\" (UID: 
\"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.843935 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" podStartSLOduration=4.1674215199999995 podStartE2EDuration="1m39.843913179s" podCreationTimestamp="2026-02-02 11:14:54 +0000 UTC" firstStartedPulling="2026-02-02 11:14:55.885308897 +0000 UTC m=+1072.889557859" lastFinishedPulling="2026-02-02 11:16:31.561800566 +0000 UTC m=+1168.566049518" observedRunningTime="2026-02-02 11:16:33.836935342 +0000 UTC m=+1170.841184304" watchObservedRunningTime="2026-02-02 11:16:33.843913179 +0000 UTC m=+1170.848162151" Feb 02 11:16:33 crc kubenswrapper[4925]: I0202 11:16:33.995577 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.069237 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.154667 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-config\") pod \"f4232060-525a-4d70-8c74-1bc8d38330d3\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.155063 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-dns-svc\") pod \"f4232060-525a-4d70-8c74-1bc8d38330d3\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.155313 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njnwf\" (UniqueName: \"kubernetes.io/projected/f4232060-525a-4d70-8c74-1bc8d38330d3-kube-api-access-njnwf\") pod \"f4232060-525a-4d70-8c74-1bc8d38330d3\" (UID: \"f4232060-525a-4d70-8c74-1bc8d38330d3\") " Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.155325 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-config" (OuterVolumeSpecName: "config") pod "f4232060-525a-4d70-8c74-1bc8d38330d3" (UID: "f4232060-525a-4d70-8c74-1bc8d38330d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.157942 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4232060-525a-4d70-8c74-1bc8d38330d3" (UID: "f4232060-525a-4d70-8c74-1bc8d38330d3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.163653 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4232060-525a-4d70-8c74-1bc8d38330d3-kube-api-access-njnwf" (OuterVolumeSpecName: "kube-api-access-njnwf") pod "f4232060-525a-4d70-8c74-1bc8d38330d3" (UID: "f4232060-525a-4d70-8c74-1bc8d38330d3"). InnerVolumeSpecName "kube-api-access-njnwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.256894 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.256940 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4232060-525a-4d70-8c74-1bc8d38330d3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.256952 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njnwf\" (UniqueName: \"kubernetes.io/projected/f4232060-525a-4d70-8c74-1bc8d38330d3-kube-api-access-njnwf\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.276451 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.340276 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.399416 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-gkjgn"] Feb 02 11:16:34 crc kubenswrapper[4925]: W0202 11:16:34.400862 4925 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b40d9ff_7cf3_4538_9667_10a25844d630.slice/crio-fafc947d694b7d2061f474c732ef2cbfe253899b66d860b825ee21d7cacd366a WatchSource:0}: Error finding container fafc947d694b7d2061f474c732ef2cbfe253899b66d860b825ee21d7cacd366a: Status 404 returned error can't find the container with id fafc947d694b7d2061f474c732ef2cbfe253899b66d860b825ee21d7cacd366a Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.456465 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.560468 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-config\") pod \"f41144d6-f6b8-4844-8cf4-31d25f69535e\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.560615 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck8pk\" (UniqueName: \"kubernetes.io/projected/f41144d6-f6b8-4844-8cf4-31d25f69535e-kube-api-access-ck8pk\") pod \"f41144d6-f6b8-4844-8cf4-31d25f69535e\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.560670 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-dns-svc\") pod \"f41144d6-f6b8-4844-8cf4-31d25f69535e\" (UID: \"f41144d6-f6b8-4844-8cf4-31d25f69535e\") " Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.564235 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41144d6-f6b8-4844-8cf4-31d25f69535e-kube-api-access-ck8pk" (OuterVolumeSpecName: "kube-api-access-ck8pk") pod "f41144d6-f6b8-4844-8cf4-31d25f69535e" (UID: 
"f41144d6-f6b8-4844-8cf4-31d25f69535e"). InnerVolumeSpecName "kube-api-access-ck8pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.579188 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f6t4k"] Feb 02 11:16:34 crc kubenswrapper[4925]: W0202 11:16:34.583465 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b31fe1a_24c3_4692_bf53_f2a6c63c8444.slice/crio-35f24accee2c869132c88ea243c51994767a800d9dbbf1d123e982a555190ccb WatchSource:0}: Error finding container 35f24accee2c869132c88ea243c51994767a800d9dbbf1d123e982a555190ccb: Status 404 returned error can't find the container with id 35f24accee2c869132c88ea243c51994767a800d9dbbf1d123e982a555190ccb Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.600591 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-config" (OuterVolumeSpecName: "config") pod "f41144d6-f6b8-4844-8cf4-31d25f69535e" (UID: "f41144d6-f6b8-4844-8cf4-31d25f69535e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.604906 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f41144d6-f6b8-4844-8cf4-31d25f69535e" (UID: "f41144d6-f6b8-4844-8cf4-31d25f69535e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.662462 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.662523 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck8pk\" (UniqueName: \"kubernetes.io/projected/f41144d6-f6b8-4844-8cf4-31d25f69535e-kube-api-access-ck8pk\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.662541 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f41144d6-f6b8-4844-8cf4-31d25f69535e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.838287 4925 generic.go:334] "Generic (PLEG): container finished" podID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" containerID="b7b93ab4c3abb5a23a756b45ee4b9b7e55152b8388d782444d1e2229db24ce33" exitCode=0 Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.838370 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" event={"ID":"3b31fe1a-24c3-4692-bf53-f2a6c63c8444","Type":"ContainerDied","Data":"b7b93ab4c3abb5a23a756b45ee4b9b7e55152b8388d782444d1e2229db24ce33"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.838396 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" event={"ID":"3b31fe1a-24c3-4692-bf53-f2a6c63c8444","Type":"ContainerStarted","Data":"35f24accee2c869132c88ea243c51994767a800d9dbbf1d123e982a555190ccb"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.844358 4925 generic.go:334] "Generic (PLEG): container finished" podID="f41144d6-f6b8-4844-8cf4-31d25f69535e" containerID="6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d" exitCode=0 Feb 02 11:16:34 
crc kubenswrapper[4925]: I0202 11:16:34.844497 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.845229 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" event={"ID":"f41144d6-f6b8-4844-8cf4-31d25f69535e","Type":"ContainerDied","Data":"6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.845275 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vpz4b" event={"ID":"f41144d6-f6b8-4844-8cf4-31d25f69535e","Type":"ContainerDied","Data":"79294bfc69a332648f95433d9a9f41330969bd94a822a4164bbb37a4403ef10a"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.845305 4925 scope.go:117] "RemoveContainer" containerID="6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.847726 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"435dc982-a475-4753-81d0-58bff20a6f17","Type":"ContainerStarted","Data":"34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.851761 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-26w5s" event={"ID":"d118fb79-debc-4d5d-b390-38f913681237","Type":"ContainerStarted","Data":"0e61eb68ab7e26d7f411e89fa012f517a8cf6e455780bcaee538693b101933ae"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.852880 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.852937 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 
11:16:34.854536 4925 generic.go:334] "Generic (PLEG): container finished" podID="6b40d9ff-7cf3-4538-9667-10a25844d630" containerID="82f5ad9068dfa43e3dd4c2de402c62231b71caba89fdd536b4ace623bb40db04" exitCode=0 Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.854581 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" event={"ID":"6b40d9ff-7cf3-4538-9667-10a25844d630","Type":"ContainerDied","Data":"82f5ad9068dfa43e3dd4c2de402c62231b71caba89fdd536b4ace623bb40db04"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.854598 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" event={"ID":"6b40d9ff-7cf3-4538-9667-10a25844d630","Type":"ContainerStarted","Data":"fafc947d694b7d2061f474c732ef2cbfe253899b66d860b825ee21d7cacd366a"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.864657 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" event={"ID":"f4232060-525a-4d70-8c74-1bc8d38330d3","Type":"ContainerDied","Data":"76465ba62fe01251b3c5c3c692de0ccbe0801a25cc30e76798145faf9b1aa4a6"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.864767 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-d6md8" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.866400 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6bf67c7c-0e93-499e-9530-735520afac74","Type":"ContainerStarted","Data":"2c39e8f05869b253f3725dd00bf63afa49d07a2f07a57200c67c2733b6af3ad8"} Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.866434 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.866590 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.871542 4925 scope.go:117] "RemoveContainer" containerID="50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.907685 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpz4b"] Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.916599 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vpz4b"] Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.917110 4925 scope.go:117] "RemoveContainer" containerID="6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d" Feb 02 11:16:34 crc kubenswrapper[4925]: E0202 11:16:34.918170 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d\": container with ID starting with 6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d not found: ID does not exist" containerID="6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.918229 4925 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d"} err="failed to get container status \"6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d\": rpc error: code = NotFound desc = could not find container \"6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d\": container with ID starting with 6a866c6649cafdcdcf089b1736e067dbdb1870ef63f294e6c8c53c1c238f6c5d not found: ID does not exist" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.918257 4925 scope.go:117] "RemoveContainer" containerID="50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5" Feb 02 11:16:34 crc kubenswrapper[4925]: E0202 11:16:34.918557 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5\": container with ID starting with 50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5 not found: ID does not exist" containerID="50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.918594 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5"} err="failed to get container status \"50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5\": rpc error: code = NotFound desc = could not find container \"50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5\": container with ID starting with 50096e6bc6c18e5451a51a63f41e4bc2203b21631f507cd9005f5e36babd5ec5 not found: ID does not exist" Feb 02 11:16:34 crc kubenswrapper[4925]: I0202 11:16:34.959094 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-26w5s" podStartSLOduration=55.615489253 podStartE2EDuration="1m29.959053914s" podCreationTimestamp="2026-02-02 11:15:05 +0000 
UTC" firstStartedPulling="2026-02-02 11:15:57.200820351 +0000 UTC m=+1134.205069313" lastFinishedPulling="2026-02-02 11:16:31.544385012 +0000 UTC m=+1168.548633974" observedRunningTime="2026-02-02 11:16:34.958032027 +0000 UTC m=+1171.962280999" watchObservedRunningTime="2026-02-02 11:16:34.959053914 +0000 UTC m=+1171.963302876" Feb 02 11:16:35 crc kubenswrapper[4925]: I0202 11:16:35.019009 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-d6md8"] Feb 02 11:16:35 crc kubenswrapper[4925]: I0202 11:16:35.032165 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-d6md8"] Feb 02 11:16:35 crc kubenswrapper[4925]: I0202 11:16:35.040614 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.585792794 podStartE2EDuration="1m36.040592199s" podCreationTimestamp="2026-02-02 11:14:59 +0000 UTC" firstStartedPulling="2026-02-02 11:15:00.515465764 +0000 UTC m=+1077.519714726" lastFinishedPulling="2026-02-02 11:16:33.970265169 +0000 UTC m=+1170.974514131" observedRunningTime="2026-02-02 11:16:35.013061534 +0000 UTC m=+1172.017310496" watchObservedRunningTime="2026-02-02 11:16:35.040592199 +0000 UTC m=+1172.044841161" Feb 02 11:16:35 crc kubenswrapper[4925]: I0202 11:16:35.877362 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" event={"ID":"3b31fe1a-24c3-4692-bf53-f2a6c63c8444","Type":"ContainerStarted","Data":"cf54e11f7337d7a45e3e1be1098802c0309d9080bca23a51b31f024b19a1a0c3"} Feb 02 11:16:35 crc kubenswrapper[4925]: I0202 11:16:35.878483 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:35 crc kubenswrapper[4925]: I0202 11:16:35.881352 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" 
event={"ID":"6b40d9ff-7cf3-4538-9667-10a25844d630","Type":"ContainerStarted","Data":"4d36ce70d735e148575b6ca3c0f05b51ea07c30f6698160f6024a4a1b8c8e3d8"} Feb 02 11:16:35 crc kubenswrapper[4925]: I0202 11:16:35.882183 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:35 crc kubenswrapper[4925]: I0202 11:16:35.904492 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" podStartSLOduration=2.904469771 podStartE2EDuration="2.904469771s" podCreationTimestamp="2026-02-02 11:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:16:35.896628742 +0000 UTC m=+1172.900877714" watchObservedRunningTime="2026-02-02 11:16:35.904469771 +0000 UTC m=+1172.908718733" Feb 02 11:16:35 crc kubenswrapper[4925]: I0202 11:16:35.920717 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" podStartSLOduration=2.920699894 podStartE2EDuration="2.920699894s" podCreationTimestamp="2026-02-02 11:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:16:35.91792083 +0000 UTC m=+1172.922169792" watchObservedRunningTime="2026-02-02 11:16:35.920699894 +0000 UTC m=+1172.924948856" Feb 02 11:16:36 crc kubenswrapper[4925]: I0202 11:16:36.174616 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 02 11:16:36 crc kubenswrapper[4925]: I0202 11:16:36.212760 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 02 11:16:36 crc kubenswrapper[4925]: I0202 11:16:36.676516 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41144d6-f6b8-4844-8cf4-31d25f69535e" 
path="/var/lib/kubelet/pods/f41144d6-f6b8-4844-8cf4-31d25f69535e/volumes" Feb 02 11:16:36 crc kubenswrapper[4925]: I0202 11:16:36.677275 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4232060-525a-4d70-8c74-1bc8d38330d3" path="/var/lib/kubelet/pods/f4232060-525a-4d70-8c74-1bc8d38330d3/volumes" Feb 02 11:16:36 crc kubenswrapper[4925]: I0202 11:16:36.941994 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.317160 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.501191 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 02 11:16:37 crc kubenswrapper[4925]: E0202 11:16:37.501513 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41144d6-f6b8-4844-8cf4-31d25f69535e" containerName="init" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.501536 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41144d6-f6b8-4844-8cf4-31d25f69535e" containerName="init" Feb 02 11:16:37 crc kubenswrapper[4925]: E0202 11:16:37.501559 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41144d6-f6b8-4844-8cf4-31d25f69535e" containerName="dnsmasq-dns" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.501566 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41144d6-f6b8-4844-8cf4-31d25f69535e" containerName="dnsmasq-dns" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.501715 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41144d6-f6b8-4844-8cf4-31d25f69535e" containerName="dnsmasq-dns" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.502584 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.504397 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-q225w" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.504559 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.504768 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.506217 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.519713 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.620222 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b370a7b-282f-4481-8275-39c981b54f35-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.620472 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b370a7b-282f-4481-8275-39c981b54f35-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.620556 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b370a7b-282f-4481-8275-39c981b54f35-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.620649 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26czk\" (UniqueName: \"kubernetes.io/projected/3b370a7b-282f-4481-8275-39c981b54f35-kube-api-access-26czk\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.620918 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b370a7b-282f-4481-8275-39c981b54f35-scripts\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.621046 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b370a7b-282f-4481-8275-39c981b54f35-config\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.621146 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b370a7b-282f-4481-8275-39c981b54f35-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.722761 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b370a7b-282f-4481-8275-39c981b54f35-config\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.722822 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b370a7b-282f-4481-8275-39c981b54f35-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.722891 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b370a7b-282f-4481-8275-39c981b54f35-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.722918 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b370a7b-282f-4481-8275-39c981b54f35-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.722956 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b370a7b-282f-4481-8275-39c981b54f35-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.722996 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26czk\" (UniqueName: \"kubernetes.io/projected/3b370a7b-282f-4481-8275-39c981b54f35-kube-api-access-26czk\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.723170 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b370a7b-282f-4481-8275-39c981b54f35-scripts\") pod \"ovn-northd-0\" (UID: 
\"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.723447 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b370a7b-282f-4481-8275-39c981b54f35-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.723725 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b370a7b-282f-4481-8275-39c981b54f35-config\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.724343 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b370a7b-282f-4481-8275-39c981b54f35-scripts\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.729660 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b370a7b-282f-4481-8275-39c981b54f35-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.733973 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b370a7b-282f-4481-8275-39c981b54f35-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.737447 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3b370a7b-282f-4481-8275-39c981b54f35-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.743414 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26czk\" (UniqueName: \"kubernetes.io/projected/3b370a7b-282f-4481-8275-39c981b54f35-kube-api-access-26czk\") pod \"ovn-northd-0\" (UID: \"3b370a7b-282f-4481-8275-39c981b54f35\") " pod="openstack/ovn-northd-0" Feb 02 11:16:37 crc kubenswrapper[4925]: I0202 11:16:37.820412 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 11:16:38 crc kubenswrapper[4925]: I0202 11:16:38.288031 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 11:16:38 crc kubenswrapper[4925]: I0202 11:16:38.923163 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b370a7b-282f-4481-8275-39c981b54f35","Type":"ContainerStarted","Data":"3a6aa7c2a0e839601ceb59cca71cd21b14de292dab7048b8dea8dbe8e799dcbb"} Feb 02 11:16:39 crc kubenswrapper[4925]: I0202 11:16:39.873819 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 11:16:39 crc kubenswrapper[4925]: I0202 11:16:39.939748 4925 generic.go:334] "Generic (PLEG): container finished" podID="d99509bd-1ed8-4516-8ed2-8d99b8e33c67" containerID="2ec71c3b1a9ed8e8329d8bbf8d9bfd9449404b53e40205c1254919b0afc058d9" exitCode=0 Feb 02 11:16:39 crc kubenswrapper[4925]: I0202 11:16:39.939792 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d99509bd-1ed8-4516-8ed2-8d99b8e33c67","Type":"ContainerDied","Data":"2ec71c3b1a9ed8e8329d8bbf8d9bfd9449404b53e40205c1254919b0afc058d9"} Feb 02 11:16:39 crc kubenswrapper[4925]: I0202 11:16:39.943377 4925 generic.go:334] "Generic (PLEG): container 
finished" podID="64d4545e-f93a-4767-bba7-d01bcaf43c4f" containerID="7673430229c4c2fe8af301b58ee4833226cd3cd630c86b9d7e6090af3498ec8d" exitCode=0 Feb 02 11:16:39 crc kubenswrapper[4925]: I0202 11:16:39.943435 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"64d4545e-f93a-4767-bba7-d01bcaf43c4f","Type":"ContainerDied","Data":"7673430229c4c2fe8af301b58ee4833226cd3cd630c86b9d7e6090af3498ec8d"} Feb 02 11:16:40 crc kubenswrapper[4925]: I0202 11:16:40.951156 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d99509bd-1ed8-4516-8ed2-8d99b8e33c67","Type":"ContainerStarted","Data":"9efb66894d8b8b55c6dcd9cb8b0fbb766e3d254778103c0f384e52cdf8034176"} Feb 02 11:16:40 crc kubenswrapper[4925]: I0202 11:16:40.953910 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"64d4545e-f93a-4767-bba7-d01bcaf43c4f","Type":"ContainerStarted","Data":"489c4e3117049240789e93796290bddba793a184ce0c3fd44b9921639edb4c00"} Feb 02 11:16:40 crc kubenswrapper[4925]: I0202 11:16:40.980184 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.224916239 podStartE2EDuration="1m44.980160949s" podCreationTimestamp="2026-02-02 11:14:56 +0000 UTC" firstStartedPulling="2026-02-02 11:14:58.504994643 +0000 UTC m=+1075.509243605" lastFinishedPulling="2026-02-02 11:16:31.260239353 +0000 UTC m=+1168.264488315" observedRunningTime="2026-02-02 11:16:40.972899595 +0000 UTC m=+1177.977148567" watchObservedRunningTime="2026-02-02 11:16:40.980160949 +0000 UTC m=+1177.984409911" Feb 02 11:16:41 crc kubenswrapper[4925]: I0202 11:16:41.000780 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.357496403 podStartE2EDuration="1m43.000759449s" podCreationTimestamp="2026-02-02 11:14:58 +0000 UTC" 
firstStartedPulling="2026-02-02 11:15:00.070532518 +0000 UTC m=+1077.074781480" lastFinishedPulling="2026-02-02 11:16:29.713795564 +0000 UTC m=+1166.718044526" observedRunningTime="2026-02-02 11:16:40.991715517 +0000 UTC m=+1177.995964479" watchObservedRunningTime="2026-02-02 11:16:41.000759449 +0000 UTC m=+1178.005008411" Feb 02 11:16:43 crc kubenswrapper[4925]: I0202 11:16:43.398561 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:16:43 crc kubenswrapper[4925]: I0202 11:16:43.399155 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:16:43 crc kubenswrapper[4925]: I0202 11:16:43.794553 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:44.070298 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:44.145324 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-gkjgn"] Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:44.148800 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" podUID="6b40d9ff-7cf3-4538-9667-10a25844d630" containerName="dnsmasq-dns" containerID="cri-o://4d36ce70d735e148575b6ca3c0f05b51ea07c30f6698160f6024a4a1b8c8e3d8" gracePeriod=10 Feb 02 11:16:45 crc kubenswrapper[4925]: 
I0202 11:16:44.989622 4925 generic.go:334] "Generic (PLEG): container finished" podID="6b40d9ff-7cf3-4538-9667-10a25844d630" containerID="4d36ce70d735e148575b6ca3c0f05b51ea07c30f6698160f6024a4a1b8c8e3d8" exitCode=0 Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:44.989715 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" event={"ID":"6b40d9ff-7cf3-4538-9667-10a25844d630","Type":"ContainerDied","Data":"4d36ce70d735e148575b6ca3c0f05b51ea07c30f6698160f6024a4a1b8c8e3d8"} Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.605150 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.654329 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-ovsdbserver-nb\") pod \"6b40d9ff-7cf3-4538-9667-10a25844d630\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.654629 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-dns-svc\") pod \"6b40d9ff-7cf3-4538-9667-10a25844d630\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.654679 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-config\") pod \"6b40d9ff-7cf3-4538-9667-10a25844d630\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.654800 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k8hv\" (UniqueName: 
\"kubernetes.io/projected/6b40d9ff-7cf3-4538-9667-10a25844d630-kube-api-access-5k8hv\") pod \"6b40d9ff-7cf3-4538-9667-10a25844d630\" (UID: \"6b40d9ff-7cf3-4538-9667-10a25844d630\") " Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.674999 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b40d9ff-7cf3-4538-9667-10a25844d630-kube-api-access-5k8hv" (OuterVolumeSpecName: "kube-api-access-5k8hv") pod "6b40d9ff-7cf3-4538-9667-10a25844d630" (UID: "6b40d9ff-7cf3-4538-9667-10a25844d630"). InnerVolumeSpecName "kube-api-access-5k8hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.708254 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-config" (OuterVolumeSpecName: "config") pod "6b40d9ff-7cf3-4538-9667-10a25844d630" (UID: "6b40d9ff-7cf3-4538-9667-10a25844d630"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.714066 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b40d9ff-7cf3-4538-9667-10a25844d630" (UID: "6b40d9ff-7cf3-4538-9667-10a25844d630"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.716743 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b40d9ff-7cf3-4538-9667-10a25844d630" (UID: "6b40d9ff-7cf3-4538-9667-10a25844d630"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.756055 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.756136 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k8hv\" (UniqueName: \"kubernetes.io/projected/6b40d9ff-7cf3-4538-9667-10a25844d630-kube-api-access-5k8hv\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.756149 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:45 crc kubenswrapper[4925]: I0202 11:16:45.756157 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b40d9ff-7cf3-4538-9667-10a25844d630-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:46 crc kubenswrapper[4925]: I0202 11:16:46.000879 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" event={"ID":"6b40d9ff-7cf3-4538-9667-10a25844d630","Type":"ContainerDied","Data":"fafc947d694b7d2061f474c732ef2cbfe253899b66d860b825ee21d7cacd366a"} Feb 02 11:16:46 crc kubenswrapper[4925]: I0202 11:16:46.000925 4925 scope.go:117] "RemoveContainer" containerID="4d36ce70d735e148575b6ca3c0f05b51ea07c30f6698160f6024a4a1b8c8e3d8" Feb 02 11:16:46 crc kubenswrapper[4925]: I0202 11:16:46.001064 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-gkjgn" Feb 02 11:16:46 crc kubenswrapper[4925]: I0202 11:16:46.028796 4925 scope.go:117] "RemoveContainer" containerID="82f5ad9068dfa43e3dd4c2de402c62231b71caba89fdd536b4ace623bb40db04" Feb 02 11:16:46 crc kubenswrapper[4925]: I0202 11:16:46.034052 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-gkjgn"] Feb 02 11:16:46 crc kubenswrapper[4925]: I0202 11:16:46.045329 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-gkjgn"] Feb 02 11:16:46 crc kubenswrapper[4925]: E0202 11:16:46.094325 4925 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.195:35216->38.102.83.195:41693: read tcp 38.102.83.195:35216->38.102.83.195:41693: read: connection reset by peer Feb 02 11:16:46 crc kubenswrapper[4925]: I0202 11:16:46.737243 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b40d9ff-7cf3-4538-9667-10a25844d630" path="/var/lib/kubelet/pods/6b40d9ff-7cf3-4538-9667-10a25844d630/volumes" Feb 02 11:16:48 crc kubenswrapper[4925]: I0202 11:16:48.043641 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 02 11:16:48 crc kubenswrapper[4925]: I0202 11:16:48.043898 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 02 11:16:49 crc kubenswrapper[4925]: I0202 11:16:49.025452 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b370a7b-282f-4481-8275-39c981b54f35","Type":"ContainerStarted","Data":"3e739656b9a3792539d7333c8a51640e6cd90b086a9e05aea76213696d0f05f9"} Feb 02 11:16:49 crc kubenswrapper[4925]: I0202 11:16:49.472156 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 11:16:49 crc kubenswrapper[4925]: I0202 11:16:49.472492 4925 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 11:16:49 crc kubenswrapper[4925]: I0202 11:16:49.554715 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 02 11:16:50 crc kubenswrapper[4925]: I0202 11:16:50.034345 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b370a7b-282f-4481-8275-39c981b54f35","Type":"ContainerStarted","Data":"443777b7d22d874c9257f265549cbe2af55c951be3cdf96735bcf25b9a5ec442"} Feb 02 11:16:50 crc kubenswrapper[4925]: I0202 11:16:50.034490 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 11:16:50 crc kubenswrapper[4925]: I0202 11:16:50.036253 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c60f25de-220a-4eb1-b1da-30faf1a27cf8","Type":"ContainerStarted","Data":"567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25"} Feb 02 11:16:50 crc kubenswrapper[4925]: I0202 11:16:50.036539 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 11:16:50 crc kubenswrapper[4925]: I0202 11:16:50.056276 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.050009967 podStartE2EDuration="13.056254022s" podCreationTimestamp="2026-02-02 11:16:37 +0000 UTC" firstStartedPulling="2026-02-02 11:16:38.294566015 +0000 UTC m=+1175.298814977" lastFinishedPulling="2026-02-02 11:16:46.30081007 +0000 UTC m=+1183.305059032" observedRunningTime="2026-02-02 11:16:50.054996369 +0000 UTC m=+1187.059245361" watchObservedRunningTime="2026-02-02 11:16:50.056254022 +0000 UTC m=+1187.060502994" Feb 02 11:16:50 crc kubenswrapper[4925]: I0202 11:16:50.076985 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=20.116919835 podStartE2EDuration="1m49.076965375s" podCreationTimestamp="2026-02-02 11:15:01 +0000 UTC" firstStartedPulling="2026-02-02 11:15:20.407465732 +0000 UTC m=+1097.411714694" lastFinishedPulling="2026-02-02 11:16:49.367511272 +0000 UTC m=+1186.371760234" observedRunningTime="2026-02-02 11:16:50.071762636 +0000 UTC m=+1187.076011598" watchObservedRunningTime="2026-02-02 11:16:50.076965375 +0000 UTC m=+1187.081214337" Feb 02 11:16:50 crc kubenswrapper[4925]: I0202 11:16:50.115203 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 11:16:50 crc kubenswrapper[4925]: I0202 11:16:50.373661 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 02 11:16:50 crc kubenswrapper[4925]: I0202 11:16:50.449150 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 02 11:16:56 crc kubenswrapper[4925]: I0202 11:16:56.787408 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5jllb"] Feb 02 11:16:56 crc kubenswrapper[4925]: E0202 11:16:56.788255 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b40d9ff-7cf3-4538-9667-10a25844d630" containerName="init" Feb 02 11:16:56 crc kubenswrapper[4925]: I0202 11:16:56.788268 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b40d9ff-7cf3-4538-9667-10a25844d630" containerName="init" Feb 02 11:16:56 crc kubenswrapper[4925]: E0202 11:16:56.788282 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b40d9ff-7cf3-4538-9667-10a25844d630" containerName="dnsmasq-dns" Feb 02 11:16:56 crc kubenswrapper[4925]: I0202 11:16:56.788287 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b40d9ff-7cf3-4538-9667-10a25844d630" containerName="dnsmasq-dns" Feb 02 11:16:56 crc kubenswrapper[4925]: I0202 11:16:56.788441 
4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b40d9ff-7cf3-4538-9667-10a25844d630" containerName="dnsmasq-dns" Feb 02 11:16:56 crc kubenswrapper[4925]: I0202 11:16:56.788928 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5jllb" Feb 02 11:16:56 crc kubenswrapper[4925]: I0202 11:16:56.791327 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 11:16:56 crc kubenswrapper[4925]: I0202 11:16:56.800689 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5jllb"] Feb 02 11:16:56 crc kubenswrapper[4925]: I0202 11:16:56.935120 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869547e9-b651-4732-a999-d2de0d0f387c-operator-scripts\") pod \"root-account-create-update-5jllb\" (UID: \"869547e9-b651-4732-a999-d2de0d0f387c\") " pod="openstack/root-account-create-update-5jllb" Feb 02 11:16:56 crc kubenswrapper[4925]: I0202 11:16:56.935179 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smw5n\" (UniqueName: \"kubernetes.io/projected/869547e9-b651-4732-a999-d2de0d0f387c-kube-api-access-smw5n\") pod \"root-account-create-update-5jllb\" (UID: \"869547e9-b651-4732-a999-d2de0d0f387c\") " pod="openstack/root-account-create-update-5jllb" Feb 02 11:16:57 crc kubenswrapper[4925]: I0202 11:16:57.036632 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869547e9-b651-4732-a999-d2de0d0f387c-operator-scripts\") pod \"root-account-create-update-5jllb\" (UID: \"869547e9-b651-4732-a999-d2de0d0f387c\") " pod="openstack/root-account-create-update-5jllb" Feb 02 11:16:57 crc kubenswrapper[4925]: I0202 11:16:57.036691 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smw5n\" (UniqueName: \"kubernetes.io/projected/869547e9-b651-4732-a999-d2de0d0f387c-kube-api-access-smw5n\") pod \"root-account-create-update-5jllb\" (UID: \"869547e9-b651-4732-a999-d2de0d0f387c\") " pod="openstack/root-account-create-update-5jllb" Feb 02 11:16:57 crc kubenswrapper[4925]: I0202 11:16:57.037834 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869547e9-b651-4732-a999-d2de0d0f387c-operator-scripts\") pod \"root-account-create-update-5jllb\" (UID: \"869547e9-b651-4732-a999-d2de0d0f387c\") " pod="openstack/root-account-create-update-5jllb" Feb 02 11:16:57 crc kubenswrapper[4925]: I0202 11:16:57.060889 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smw5n\" (UniqueName: \"kubernetes.io/projected/869547e9-b651-4732-a999-d2de0d0f387c-kube-api-access-smw5n\") pod \"root-account-create-update-5jllb\" (UID: \"869547e9-b651-4732-a999-d2de0d0f387c\") " pod="openstack/root-account-create-update-5jllb" Feb 02 11:16:57 crc kubenswrapper[4925]: I0202 11:16:57.109736 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5jllb" Feb 02 11:16:57 crc kubenswrapper[4925]: I0202 11:16:57.513922 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5jllb"] Feb 02 11:16:57 crc kubenswrapper[4925]: W0202 11:16:57.517349 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod869547e9_b651_4732_a999_d2de0d0f387c.slice/crio-00ef110041d94fc0937e5a71cc76bf0c5b89d25bba2d8f155c8bb833016029ae WatchSource:0}: Error finding container 00ef110041d94fc0937e5a71cc76bf0c5b89d25bba2d8f155c8bb833016029ae: Status 404 returned error can't find the container with id 00ef110041d94fc0937e5a71cc76bf0c5b89d25bba2d8f155c8bb833016029ae Feb 02 11:16:58 crc kubenswrapper[4925]: I0202 11:16:58.096439 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5jllb" event={"ID":"869547e9-b651-4732-a999-d2de0d0f387c","Type":"ContainerStarted","Data":"7fd372f8ba4e79c3fe103d832c23ac29312a39d94769378b7c350666a1d37cb6"} Feb 02 11:16:58 crc kubenswrapper[4925]: I0202 11:16:58.096773 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5jllb" event={"ID":"869547e9-b651-4732-a999-d2de0d0f387c","Type":"ContainerStarted","Data":"00ef110041d94fc0937e5a71cc76bf0c5b89d25bba2d8f155c8bb833016029ae"} Feb 02 11:16:58 crc kubenswrapper[4925]: I0202 11:16:58.112954 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-5jllb" podStartSLOduration=2.112939325 podStartE2EDuration="2.112939325s" podCreationTimestamp="2026-02-02 11:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:16:58.110277194 +0000 UTC m=+1195.114526166" watchObservedRunningTime="2026-02-02 11:16:58.112939325 +0000 UTC m=+1195.117188287" Feb 
02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.402913 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qpjpr"] Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.404052 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qpjpr" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.412769 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qpjpr"] Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.475616 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7159077-105a-48ca-96f6-12ffb19c7a93-operator-scripts\") pod \"keystone-db-create-qpjpr\" (UID: \"d7159077-105a-48ca-96f6-12ffb19c7a93\") " pod="openstack/keystone-db-create-qpjpr" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.475754 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdpz\" (UniqueName: \"kubernetes.io/projected/d7159077-105a-48ca-96f6-12ffb19c7a93-kube-api-access-8qdpz\") pod \"keystone-db-create-qpjpr\" (UID: \"d7159077-105a-48ca-96f6-12ffb19c7a93\") " pod="openstack/keystone-db-create-qpjpr" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.512196 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a0ad-account-create-update-5n6nq"] Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.513726 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.516697 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.525428 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a0ad-account-create-update-5n6nq"] Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.577206 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7159077-105a-48ca-96f6-12ffb19c7a93-operator-scripts\") pod \"keystone-db-create-qpjpr\" (UID: \"d7159077-105a-48ca-96f6-12ffb19c7a93\") " pod="openstack/keystone-db-create-qpjpr" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.577293 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qdpz\" (UniqueName: \"kubernetes.io/projected/d7159077-105a-48ca-96f6-12ffb19c7a93-kube-api-access-8qdpz\") pod \"keystone-db-create-qpjpr\" (UID: \"d7159077-105a-48ca-96f6-12ffb19c7a93\") " pod="openstack/keystone-db-create-qpjpr" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.578925 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7159077-105a-48ca-96f6-12ffb19c7a93-operator-scripts\") pod \"keystone-db-create-qpjpr\" (UID: \"d7159077-105a-48ca-96f6-12ffb19c7a93\") " pod="openstack/keystone-db-create-qpjpr" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.597338 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qdpz\" (UniqueName: \"kubernetes.io/projected/d7159077-105a-48ca-96f6-12ffb19c7a93-kube-api-access-8qdpz\") pod \"keystone-db-create-qpjpr\" (UID: \"d7159077-105a-48ca-96f6-12ffb19c7a93\") " pod="openstack/keystone-db-create-qpjpr" Feb 02 11:16:59 crc 
kubenswrapper[4925]: I0202 11:16:59.679318 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-operator-scripts\") pod \"keystone-a0ad-account-create-update-5n6nq\" (UID: \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\") " pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.679422 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km828\" (UniqueName: \"kubernetes.io/projected/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-kube-api-access-km828\") pod \"keystone-a0ad-account-create-update-5n6nq\" (UID: \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\") " pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.719710 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qpjpr" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.736316 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-tz6vw"] Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.737499 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-tz6vw" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.750582 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tz6vw"] Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.781766 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-operator-scripts\") pod \"keystone-a0ad-account-create-update-5n6nq\" (UID: \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\") " pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.781865 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km828\" (UniqueName: \"kubernetes.io/projected/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-kube-api-access-km828\") pod \"keystone-a0ad-account-create-update-5n6nq\" (UID: \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\") " pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.782895 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-operator-scripts\") pod \"keystone-a0ad-account-create-update-5n6nq\" (UID: \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\") " pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.800203 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km828\" (UniqueName: \"kubernetes.io/projected/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-kube-api-access-km828\") pod \"keystone-a0ad-account-create-update-5n6nq\" (UID: \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\") " pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.837341 4925 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.846774 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b182-account-create-update-sxbt4"] Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.848238 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.850345 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.883143 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050787f9-1101-4195-9b1b-1f1b5fa090cd-operator-scripts\") pod \"placement-db-create-tz6vw\" (UID: \"050787f9-1101-4195-9b1b-1f1b5fa090cd\") " pod="openstack/placement-db-create-tz6vw" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.883575 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbzfw\" (UniqueName: \"kubernetes.io/projected/050787f9-1101-4195-9b1b-1f1b5fa090cd-kube-api-access-pbzfw\") pod \"placement-db-create-tz6vw\" (UID: \"050787f9-1101-4195-9b1b-1f1b5fa090cd\") " pod="openstack/placement-db-create-tz6vw" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.912799 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b182-account-create-update-sxbt4"] Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.984678 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7070fb66-e433-4f35-b5a7-b4666e8c0638-operator-scripts\") pod \"placement-b182-account-create-update-sxbt4\" (UID: 
\"7070fb66-e433-4f35-b5a7-b4666e8c0638\") " pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.984754 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050787f9-1101-4195-9b1b-1f1b5fa090cd-operator-scripts\") pod \"placement-db-create-tz6vw\" (UID: \"050787f9-1101-4195-9b1b-1f1b5fa090cd\") " pod="openstack/placement-db-create-tz6vw" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.984779 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5t9z\" (UniqueName: \"kubernetes.io/projected/7070fb66-e433-4f35-b5a7-b4666e8c0638-kube-api-access-v5t9z\") pod \"placement-b182-account-create-update-sxbt4\" (UID: \"7070fb66-e433-4f35-b5a7-b4666e8c0638\") " pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.984812 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbzfw\" (UniqueName: \"kubernetes.io/projected/050787f9-1101-4195-9b1b-1f1b5fa090cd-kube-api-access-pbzfw\") pod \"placement-db-create-tz6vw\" (UID: \"050787f9-1101-4195-9b1b-1f1b5fa090cd\") " pod="openstack/placement-db-create-tz6vw" Feb 02 11:16:59 crc kubenswrapper[4925]: I0202 11:16:59.986070 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050787f9-1101-4195-9b1b-1f1b5fa090cd-operator-scripts\") pod \"placement-db-create-tz6vw\" (UID: \"050787f9-1101-4195-9b1b-1f1b5fa090cd\") " pod="openstack/placement-db-create-tz6vw" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.008674 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cvgcr"] Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.009860 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.015477 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cvgcr"] Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.016174 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbzfw\" (UniqueName: \"kubernetes.io/projected/050787f9-1101-4195-9b1b-1f1b5fa090cd-kube-api-access-pbzfw\") pod \"placement-db-create-tz6vw\" (UID: \"050787f9-1101-4195-9b1b-1f1b5fa090cd\") " pod="openstack/placement-db-create-tz6vw" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.085833 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfzl\" (UniqueName: \"kubernetes.io/projected/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-kube-api-access-xwfzl\") pod \"glance-db-create-cvgcr\" (UID: \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\") " pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.085889 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5t9z\" (UniqueName: \"kubernetes.io/projected/7070fb66-e433-4f35-b5a7-b4666e8c0638-kube-api-access-v5t9z\") pod \"placement-b182-account-create-update-sxbt4\" (UID: \"7070fb66-e433-4f35-b5a7-b4666e8c0638\") " pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.085925 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-operator-scripts\") pod \"glance-db-create-cvgcr\" (UID: \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\") " pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.086107 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7070fb66-e433-4f35-b5a7-b4666e8c0638-operator-scripts\") pod \"placement-b182-account-create-update-sxbt4\" (UID: \"7070fb66-e433-4f35-b5a7-b4666e8c0638\") " pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.087073 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7070fb66-e433-4f35-b5a7-b4666e8c0638-operator-scripts\") pod \"placement-b182-account-create-update-sxbt4\" (UID: \"7070fb66-e433-4f35-b5a7-b4666e8c0638\") " pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.103379 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5t9z\" (UniqueName: \"kubernetes.io/projected/7070fb66-e433-4f35-b5a7-b4666e8c0638-kube-api-access-v5t9z\") pod \"placement-b182-account-create-update-sxbt4\" (UID: \"7070fb66-e433-4f35-b5a7-b4666e8c0638\") " pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.111787 4925 generic.go:334] "Generic (PLEG): container finished" podID="869547e9-b651-4732-a999-d2de0d0f387c" containerID="7fd372f8ba4e79c3fe103d832c23ac29312a39d94769378b7c350666a1d37cb6" exitCode=0 Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.111826 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5jllb" event={"ID":"869547e9-b651-4732-a999-d2de0d0f387c","Type":"ContainerDied","Data":"7fd372f8ba4e79c3fe103d832c23ac29312a39d94769378b7c350666a1d37cb6"} Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.150520 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-efb2-account-create-update-tnvpf"] Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.151874 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.154160 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.161041 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-efb2-account-create-update-tnvpf"] Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.187650 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-operator-scripts\") pod \"glance-db-create-cvgcr\" (UID: \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\") " pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.187801 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfzl\" (UniqueName: \"kubernetes.io/projected/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-kube-api-access-xwfzl\") pod \"glance-db-create-cvgcr\" (UID: \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\") " pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.188882 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-operator-scripts\") pod \"glance-db-create-cvgcr\" (UID: \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\") " pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.205866 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfzl\" (UniqueName: \"kubernetes.io/projected/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-kube-api-access-xwfzl\") pod \"glance-db-create-cvgcr\" (UID: \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\") " pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 
11:17:00.213907 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tz6vw" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.216207 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qpjpr"] Feb 02 11:17:00 crc kubenswrapper[4925]: W0202 11:17:00.217233 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7159077_105a_48ca_96f6_12ffb19c7a93.slice/crio-a8c447f47dc87fc9d685e9be308ca6067a46277146b6a9fbe260bbe8ae79a837 WatchSource:0}: Error finding container a8c447f47dc87fc9d685e9be308ca6067a46277146b6a9fbe260bbe8ae79a837: Status 404 returned error can't find the container with id a8c447f47dc87fc9d685e9be308ca6067a46277146b6a9fbe260bbe8ae79a837 Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.225293 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.289164 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5081cd7-3a75-408b-a30d-eb1156689a3d-operator-scripts\") pod \"glance-efb2-account-create-update-tnvpf\" (UID: \"e5081cd7-3a75-408b-a30d-eb1156689a3d\") " pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.289448 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdxng\" (UniqueName: \"kubernetes.io/projected/e5081cd7-3a75-408b-a30d-eb1156689a3d-kube-api-access-qdxng\") pod \"glance-efb2-account-create-update-tnvpf\" (UID: \"e5081cd7-3a75-408b-a30d-eb1156689a3d\") " pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.332356 4925 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.348937 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a0ad-account-create-update-5n6nq"] Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.390472 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5081cd7-3a75-408b-a30d-eb1156689a3d-operator-scripts\") pod \"glance-efb2-account-create-update-tnvpf\" (UID: \"e5081cd7-3a75-408b-a30d-eb1156689a3d\") " pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.390595 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdxng\" (UniqueName: \"kubernetes.io/projected/e5081cd7-3a75-408b-a30d-eb1156689a3d-kube-api-access-qdxng\") pod \"glance-efb2-account-create-update-tnvpf\" (UID: \"e5081cd7-3a75-408b-a30d-eb1156689a3d\") " pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.391602 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5081cd7-3a75-408b-a30d-eb1156689a3d-operator-scripts\") pod \"glance-efb2-account-create-update-tnvpf\" (UID: \"e5081cd7-3a75-408b-a30d-eb1156689a3d\") " pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.408561 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdxng\" (UniqueName: \"kubernetes.io/projected/e5081cd7-3a75-408b-a30d-eb1156689a3d-kube-api-access-qdxng\") pod \"glance-efb2-account-create-update-tnvpf\" (UID: \"e5081cd7-3a75-408b-a30d-eb1156689a3d\") " pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.468319 
4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.664295 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tz6vw"] Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.733809 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b182-account-create-update-sxbt4"] Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.830705 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cvgcr"] Feb 02 11:17:00 crc kubenswrapper[4925]: W0202 11:17:00.944198 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5081cd7_3a75_408b_a30d_eb1156689a3d.slice/crio-dd97c774647d45748dbde019e5347fe3bda92b984172a27dff757c89b1e27bb0 WatchSource:0}: Error finding container dd97c774647d45748dbde019e5347fe3bda92b984172a27dff757c89b1e27bb0: Status 404 returned error can't find the container with id dd97c774647d45748dbde019e5347fe3bda92b984172a27dff757c89b1e27bb0 Feb 02 11:17:00 crc kubenswrapper[4925]: I0202 11:17:00.946776 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-efb2-account-create-update-tnvpf"] Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.119102 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cvgcr" event={"ID":"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf","Type":"ContainerStarted","Data":"d21fd62010b566bd46735386ea7aa31c4d22d6ac70ad56c715d2e03a7ecebd3c"} Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.121005 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a0ad-account-create-update-5n6nq" event={"ID":"db5a7454-06cf-426b-93f7-8d0c2b0a27d2","Type":"ContainerStarted","Data":"dad04db3ee442aa4dc2244e2e6560f33c536ca1324b90626d5940cec99a72291"} Feb 02 11:17:01 
crc kubenswrapper[4925]: I0202 11:17:01.122585 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qpjpr" event={"ID":"d7159077-105a-48ca-96f6-12ffb19c7a93","Type":"ContainerStarted","Data":"a8c447f47dc87fc9d685e9be308ca6067a46277146b6a9fbe260bbe8ae79a837"} Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.124473 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-efb2-account-create-update-tnvpf" event={"ID":"e5081cd7-3a75-408b-a30d-eb1156689a3d","Type":"ContainerStarted","Data":"dd97c774647d45748dbde019e5347fe3bda92b984172a27dff757c89b1e27bb0"} Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.125648 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tz6vw" event={"ID":"050787f9-1101-4195-9b1b-1f1b5fa090cd","Type":"ContainerStarted","Data":"3e3b716a373f96478130c474cb347f78b8ff00a9df2c62fe2ef14b798edc03bc"} Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.127208 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b182-account-create-update-sxbt4" event={"ID":"7070fb66-e433-4f35-b5a7-b4666e8c0638","Type":"ContainerStarted","Data":"a78b3ec61d0f5aaa4d47663359d53227bbcd735d3685f03530f62165e6551be9"} Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.404026 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5jllb" Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.507241 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smw5n\" (UniqueName: \"kubernetes.io/projected/869547e9-b651-4732-a999-d2de0d0f387c-kube-api-access-smw5n\") pod \"869547e9-b651-4732-a999-d2de0d0f387c\" (UID: \"869547e9-b651-4732-a999-d2de0d0f387c\") " Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.507346 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869547e9-b651-4732-a999-d2de0d0f387c-operator-scripts\") pod \"869547e9-b651-4732-a999-d2de0d0f387c\" (UID: \"869547e9-b651-4732-a999-d2de0d0f387c\") " Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.508210 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869547e9-b651-4732-a999-d2de0d0f387c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "869547e9-b651-4732-a999-d2de0d0f387c" (UID: "869547e9-b651-4732-a999-d2de0d0f387c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.516064 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.517232 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869547e9-b651-4732-a999-d2de0d0f387c-kube-api-access-smw5n" (OuterVolumeSpecName: "kube-api-access-smw5n") pod "869547e9-b651-4732-a999-d2de0d0f387c" (UID: "869547e9-b651-4732-a999-d2de0d0f387c"). InnerVolumeSpecName "kube-api-access-smw5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.609289 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smw5n\" (UniqueName: \"kubernetes.io/projected/869547e9-b651-4732-a999-d2de0d0f387c-kube-api-access-smw5n\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:01 crc kubenswrapper[4925]: I0202 11:17:01.609329 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869547e9-b651-4732-a999-d2de0d0f387c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:02 crc kubenswrapper[4925]: I0202 11:17:02.134893 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5jllb" event={"ID":"869547e9-b651-4732-a999-d2de0d0f387c","Type":"ContainerDied","Data":"00ef110041d94fc0937e5a71cc76bf0c5b89d25bba2d8f155c8bb833016029ae"} Feb 02 11:17:02 crc kubenswrapper[4925]: I0202 11:17:02.135270 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00ef110041d94fc0937e5a71cc76bf0c5b89d25bba2d8f155c8bb833016029ae" Feb 02 11:17:02 crc kubenswrapper[4925]: I0202 11:17:02.134933 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5jllb" Feb 02 11:17:02 crc kubenswrapper[4925]: I0202 11:17:02.136355 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qpjpr" event={"ID":"d7159077-105a-48ca-96f6-12ffb19c7a93","Type":"ContainerStarted","Data":"c4c4ebf5346373705514add06e68c4e0277719ed17a240b284ea32573bdbb334"} Feb 02 11:17:03 crc kubenswrapper[4925]: I0202 11:17:03.074986 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5jllb"] Feb 02 11:17:03 crc kubenswrapper[4925]: I0202 11:17:03.081015 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5jllb"] Feb 02 11:17:03 crc kubenswrapper[4925]: I0202 11:17:03.144682 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a0ad-account-create-update-5n6nq" event={"ID":"db5a7454-06cf-426b-93f7-8d0c2b0a27d2","Type":"ContainerStarted","Data":"ef83eeb42f23f8ad93df1ed6b1afc34c5562730eba18eaed7322434108d17424"} Feb 02 11:17:04 crc kubenswrapper[4925]: I0202 11:17:04.677667 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869547e9-b651-4732-a999-d2de0d0f387c" path="/var/lib/kubelet/pods/869547e9-b651-4732-a999-d2de0d0f387c/volumes" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.166720 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-efb2-account-create-update-tnvpf" event={"ID":"e5081cd7-3a75-408b-a30d-eb1156689a3d","Type":"ContainerStarted","Data":"5eda4a561543db11e81df459f7c307989e1184aa91e335a2b63c30a73bb3bb61"} Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.597981 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gr5rf" podUID="feb2b36a-609f-4805-8b50-fe0731522375" containerName="ovn-controller" probeResult="failure" output=< Feb 02 11:17:05 crc kubenswrapper[4925]: ERROR - ovn-controller connection status is 'not connected', 
expecting 'connected' status Feb 02 11:17:05 crc kubenswrapper[4925]: > Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.616232 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.621951 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-26w5s" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.832349 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gr5rf-config-l54xc"] Feb 02 11:17:05 crc kubenswrapper[4925]: E0202 11:17:05.832716 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869547e9-b651-4732-a999-d2de0d0f387c" containerName="mariadb-account-create-update" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.832736 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="869547e9-b651-4732-a999-d2de0d0f387c" containerName="mariadb-account-create-update" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.832986 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="869547e9-b651-4732-a999-d2de0d0f387c" containerName="mariadb-account-create-update" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.833559 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.847758 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gr5rf-config-l54xc"] Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.849398 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.975615 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run-ovn\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.975708 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-log-ovn\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.975961 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-additional-scripts\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.976093 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: 
\"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.976127 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blpn\" (UniqueName: \"kubernetes.io/projected/a05798ea-e028-47df-9773-41ee87d76c15-kube-api-access-7blpn\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:05 crc kubenswrapper[4925]: I0202 11:17:05.976230 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-scripts\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.077810 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-log-ovn\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.077886 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-additional-scripts\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.077921 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run\") pod 
\"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.077942 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7blpn\" (UniqueName: \"kubernetes.io/projected/a05798ea-e028-47df-9773-41ee87d76c15-kube-api-access-7blpn\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.077969 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-scripts\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.078009 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run-ovn\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.078229 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.078283 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run-ovn\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: 
\"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.078656 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-additional-scripts\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.080357 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-scripts\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.080451 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-log-ovn\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.111427 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7blpn\" (UniqueName: \"kubernetes.io/projected/a05798ea-e028-47df-9773-41ee87d76c15-kube-api-access-7blpn\") pod \"ovn-controller-gr5rf-config-l54xc\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.155424 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.184291 4925 generic.go:334] "Generic (PLEG): container finished" podID="0d6b9691-80b3-418b-a4c7-fc80c0438123" containerID="4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73" exitCode=0 Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.184361 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d6b9691-80b3-418b-a4c7-fc80c0438123","Type":"ContainerDied","Data":"4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73"} Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.188519 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b182-account-create-update-sxbt4" event={"ID":"7070fb66-e433-4f35-b5a7-b4666e8c0638","Type":"ContainerStarted","Data":"d48df7b05a145a6ec40fca6d9424318169933e1edae8259e9729c18784d17f67"} Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.190664 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cvgcr" event={"ID":"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf","Type":"ContainerStarted","Data":"1f7a8a364bd2ddcd81af382722407f6853456a34c64e5002cf9943aefcebf439"} Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.191843 4925 generic.go:334] "Generic (PLEG): container finished" podID="435dc982-a475-4753-81d0-58bff20a6f17" containerID="34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5" exitCode=0 Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.191894 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"435dc982-a475-4753-81d0-58bff20a6f17","Type":"ContainerDied","Data":"34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5"} Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.201137 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tz6vw" 
event={"ID":"050787f9-1101-4195-9b1b-1f1b5fa090cd","Type":"ContainerStarted","Data":"007227535bdf2c53462059f2b1f0a94457acba5fee127eb94bd29005c837b34d"} Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.238703 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-qpjpr" podStartSLOduration=7.238686959 podStartE2EDuration="7.238686959s" podCreationTimestamp="2026-02-02 11:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:06.230551462 +0000 UTC m=+1203.234800454" watchObservedRunningTime="2026-02-02 11:17:06.238686959 +0000 UTC m=+1203.242935911" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.255878 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-efb2-account-create-update-tnvpf" podStartSLOduration=6.255860027 podStartE2EDuration="6.255860027s" podCreationTimestamp="2026-02-02 11:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:06.250531725 +0000 UTC m=+1203.254780707" watchObservedRunningTime="2026-02-02 11:17:06.255860027 +0000 UTC m=+1203.260108979" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.328570 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-a0ad-account-create-update-5n6nq" podStartSLOduration=7.328543606 podStartE2EDuration="7.328543606s" podCreationTimestamp="2026-02-02 11:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:06.29794834 +0000 UTC m=+1203.302197312" watchObservedRunningTime="2026-02-02 11:17:06.328543606 +0000 UTC m=+1203.332792568" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.660439 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-gr5rf-config-l54xc"] Feb 02 11:17:06 crc kubenswrapper[4925]: W0202 11:17:06.666172 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05798ea_e028_47df_9773_41ee87d76c15.slice/crio-c4990471ae57ad6e9699ea8b1ddf3dbc6bfd7a2584b6321c1e41b4a118f67ead WatchSource:0}: Error finding container c4990471ae57ad6e9699ea8b1ddf3dbc6bfd7a2584b6321c1e41b4a118f67ead: Status 404 returned error can't find the container with id c4990471ae57ad6e9699ea8b1ddf3dbc6bfd7a2584b6321c1e41b4a118f67ead Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.819970 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dhrzg"] Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.821336 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.823575 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.840983 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dhrzg"] Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.895401 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-operator-scripts\") pod \"root-account-create-update-dhrzg\" (UID: \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\") " pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.895524 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxr8p\" (UniqueName: 
\"kubernetes.io/projected/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-kube-api-access-zxr8p\") pod \"root-account-create-update-dhrzg\" (UID: \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\") " pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.997896 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-operator-scripts\") pod \"root-account-create-update-dhrzg\" (UID: \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\") " pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.997968 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxr8p\" (UniqueName: \"kubernetes.io/projected/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-kube-api-access-zxr8p\") pod \"root-account-create-update-dhrzg\" (UID: \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\") " pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:06 crc kubenswrapper[4925]: I0202 11:17:06.998889 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-operator-scripts\") pod \"root-account-create-update-dhrzg\" (UID: \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\") " pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:07 crc kubenswrapper[4925]: I0202 11:17:07.020345 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxr8p\" (UniqueName: \"kubernetes.io/projected/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-kube-api-access-zxr8p\") pod \"root-account-create-update-dhrzg\" (UID: \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\") " pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:07 crc kubenswrapper[4925]: I0202 11:17:07.151027 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:07 crc kubenswrapper[4925]: I0202 11:17:07.211895 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gr5rf-config-l54xc" event={"ID":"a05798ea-e028-47df-9773-41ee87d76c15","Type":"ContainerStarted","Data":"c4990471ae57ad6e9699ea8b1ddf3dbc6bfd7a2584b6321c1e41b4a118f67ead"} Feb 02 11:17:07 crc kubenswrapper[4925]: I0202 11:17:07.230847 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-cvgcr" podStartSLOduration=8.230822563 podStartE2EDuration="8.230822563s" podCreationTimestamp="2026-02-02 11:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:07.227596307 +0000 UTC m=+1204.231845269" watchObservedRunningTime="2026-02-02 11:17:07.230822563 +0000 UTC m=+1204.235071525" Feb 02 11:17:07 crc kubenswrapper[4925]: I0202 11:17:07.249103 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-tz6vw" podStartSLOduration=8.249058979 podStartE2EDuration="8.249058979s" podCreationTimestamp="2026-02-02 11:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:07.242054172 +0000 UTC m=+1204.246303144" watchObservedRunningTime="2026-02-02 11:17:07.249058979 +0000 UTC m=+1204.253307941" Feb 02 11:17:07 crc kubenswrapper[4925]: I0202 11:17:07.264102 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-b182-account-create-update-sxbt4" podStartSLOduration=8.264024588 podStartE2EDuration="8.264024588s" podCreationTimestamp="2026-02-02 11:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:07.258217143 
+0000 UTC m=+1204.262466125" watchObservedRunningTime="2026-02-02 11:17:07.264024588 +0000 UTC m=+1204.268273560" Feb 02 11:17:07 crc kubenswrapper[4925]: I0202 11:17:07.593490 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dhrzg"] Feb 02 11:17:07 crc kubenswrapper[4925]: I0202 11:17:07.878327 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 11:17:08 crc kubenswrapper[4925]: I0202 11:17:08.219030 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dhrzg" event={"ID":"698a5a7f-70fd-4a7f-abbb-6c648b30bd27","Type":"ContainerStarted","Data":"e6be7289a6eb15dd42df3bb2a0c7be0bd84edbcdec051c7d1ee2e457abffc736"} Feb 02 11:17:09 crc kubenswrapper[4925]: I0202 11:17:09.229671 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"435dc982-a475-4753-81d0-58bff20a6f17","Type":"ContainerStarted","Data":"259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1"} Feb 02 11:17:09 crc kubenswrapper[4925]: I0202 11:17:09.231399 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d6b9691-80b3-418b-a4c7-fc80c0438123","Type":"ContainerStarted","Data":"00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d"} Feb 02 11:17:10 crc kubenswrapper[4925]: I0202 11:17:10.239267 4925 generic.go:334] "Generic (PLEG): container finished" podID="a05798ea-e028-47df-9773-41ee87d76c15" containerID="72a8c11f472ce3a902ef45f68ceac5feda305e18822dbcaf4665559f07ebfb46" exitCode=0 Feb 02 11:17:10 crc kubenswrapper[4925]: I0202 11:17:10.239589 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gr5rf-config-l54xc" event={"ID":"a05798ea-e028-47df-9773-41ee87d76c15","Type":"ContainerDied","Data":"72a8c11f472ce3a902ef45f68ceac5feda305e18822dbcaf4665559f07ebfb46"} Feb 02 11:17:10 crc kubenswrapper[4925]: 
I0202 11:17:10.242138 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dhrzg" event={"ID":"698a5a7f-70fd-4a7f-abbb-6c648b30bd27","Type":"ContainerStarted","Data":"81da7f4ccdacbf4428d202fbada41b16d19f6f1e62ef3cc72a5c7901f7c4b063"} Feb 02 11:17:10 crc kubenswrapper[4925]: I0202 11:17:10.242374 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:17:10 crc kubenswrapper[4925]: I0202 11:17:10.242408 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 11:17:10 crc kubenswrapper[4925]: I0202 11:17:10.288486 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-dhrzg" podStartSLOduration=4.288467272 podStartE2EDuration="4.288467272s" podCreationTimestamp="2026-02-02 11:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:10.283230632 +0000 UTC m=+1207.287479614" watchObservedRunningTime="2026-02-02 11:17:10.288467272 +0000 UTC m=+1207.292716234" Feb 02 11:17:10 crc kubenswrapper[4925]: I0202 11:17:10.315013 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.684286042 podStartE2EDuration="2m15.314988119s" podCreationTimestamp="2026-02-02 11:14:55 +0000 UTC" firstStartedPulling="2026-02-02 11:14:57.077804175 +0000 UTC m=+1074.082053137" lastFinishedPulling="2026-02-02 11:16:29.708506252 +0000 UTC m=+1166.712755214" observedRunningTime="2026-02-02 11:17:10.302712652 +0000 UTC m=+1207.306961624" watchObservedRunningTime="2026-02-02 11:17:10.314988119 +0000 UTC m=+1207.319237091" Feb 02 11:17:10 crc kubenswrapper[4925]: I0202 11:17:10.335449 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=-9223371901.519342 podStartE2EDuration="2m15.335433375s" podCreationTimestamp="2026-02-02 11:14:55 +0000 UTC" firstStartedPulling="2026-02-02 11:14:57.327453245 +0000 UTC m=+1074.331702207" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:10.334800338 +0000 UTC m=+1207.339049300" watchObservedRunningTime="2026-02-02 11:17:10.335433375 +0000 UTC m=+1207.339682337" Feb 02 11:17:10 crc kubenswrapper[4925]: I0202 11:17:10.602644 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gr5rf" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.256821 4925 generic.go:334] "Generic (PLEG): container finished" podID="050787f9-1101-4195-9b1b-1f1b5fa090cd" containerID="007227535bdf2c53462059f2b1f0a94457acba5fee127eb94bd29005c837b34d" exitCode=0 Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.256932 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tz6vw" event={"ID":"050787f9-1101-4195-9b1b-1f1b5fa090cd","Type":"ContainerDied","Data":"007227535bdf2c53462059f2b1f0a94457acba5fee127eb94bd29005c837b34d"} Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.259976 4925 generic.go:334] "Generic (PLEG): container finished" podID="d7159077-105a-48ca-96f6-12ffb19c7a93" containerID="c4c4ebf5346373705514add06e68c4e0277719ed17a240b284ea32573bdbb334" exitCode=0 Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.260053 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qpjpr" event={"ID":"d7159077-105a-48ca-96f6-12ffb19c7a93","Type":"ContainerDied","Data":"c4c4ebf5346373705514add06e68c4e0277719ed17a240b284ea32573bdbb334"} Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.561218 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.571920 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7blpn\" (UniqueName: \"kubernetes.io/projected/a05798ea-e028-47df-9773-41ee87d76c15-kube-api-access-7blpn\") pod \"a05798ea-e028-47df-9773-41ee87d76c15\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.571963 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-additional-scripts\") pod \"a05798ea-e028-47df-9773-41ee87d76c15\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.571985 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run\") pod \"a05798ea-e028-47df-9773-41ee87d76c15\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.572163 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run-ovn\") pod \"a05798ea-e028-47df-9773-41ee87d76c15\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.572227 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-log-ovn\") pod \"a05798ea-e028-47df-9773-41ee87d76c15\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.572340 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-scripts\") pod \"a05798ea-e028-47df-9773-41ee87d76c15\" (UID: \"a05798ea-e028-47df-9773-41ee87d76c15\") " Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.572509 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run" (OuterVolumeSpecName: "var-run") pod "a05798ea-e028-47df-9773-41ee87d76c15" (UID: "a05798ea-e028-47df-9773-41ee87d76c15"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.573104 4925 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.573480 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a05798ea-e028-47df-9773-41ee87d76c15" (UID: "a05798ea-e028-47df-9773-41ee87d76c15"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.573495 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a05798ea-e028-47df-9773-41ee87d76c15" (UID: "a05798ea-e028-47df-9773-41ee87d76c15"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:17:11 crc kubenswrapper[4925]: E0202 11:17:11.576647 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07acf1b2_28bd_41ce_bf7f_08c5b2c5ebaf.slice/crio-1f7a8a364bd2ddcd81af382722407f6853456a34c64e5002cf9943aefcebf439.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.577098 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-scripts" (OuterVolumeSpecName: "scripts") pod "a05798ea-e028-47df-9773-41ee87d76c15" (UID: "a05798ea-e028-47df-9773-41ee87d76c15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.581201 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05798ea-e028-47df-9773-41ee87d76c15-kube-api-access-7blpn" (OuterVolumeSpecName: "kube-api-access-7blpn") pod "a05798ea-e028-47df-9773-41ee87d76c15" (UID: "a05798ea-e028-47df-9773-41ee87d76c15"). InnerVolumeSpecName "kube-api-access-7blpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.666047 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a05798ea-e028-47df-9773-41ee87d76c15" (UID: "a05798ea-e028-47df-9773-41ee87d76c15"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.673604 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7blpn\" (UniqueName: \"kubernetes.io/projected/a05798ea-e028-47df-9773-41ee87d76c15-kube-api-access-7blpn\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.673633 4925 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.673642 4925 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.673650 4925 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a05798ea-e028-47df-9773-41ee87d76c15-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:11 crc kubenswrapper[4925]: I0202 11:17:11.673659 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05798ea-e028-47df-9773-41ee87d76c15-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.267781 4925 generic.go:334] "Generic (PLEG): container finished" podID="07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf" containerID="1f7a8a364bd2ddcd81af382722407f6853456a34c64e5002cf9943aefcebf439" exitCode=0 Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.267870 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cvgcr" event={"ID":"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf","Type":"ContainerDied","Data":"1f7a8a364bd2ddcd81af382722407f6853456a34c64e5002cf9943aefcebf439"} Feb 02 11:17:12 crc kubenswrapper[4925]: 
I0202 11:17:12.269958 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gr5rf-config-l54xc" event={"ID":"a05798ea-e028-47df-9773-41ee87d76c15","Type":"ContainerDied","Data":"c4990471ae57ad6e9699ea8b1ddf3dbc6bfd7a2584b6321c1e41b4a118f67ead"} Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.270005 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4990471ae57ad6e9699ea8b1ddf3dbc6bfd7a2584b6321c1e41b4a118f67ead" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.270136 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gr5rf-config-l54xc" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.574808 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tz6vw" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.682063 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gr5rf-config-l54xc"] Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.683843 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qpjpr" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.690586 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gr5rf-config-l54xc"] Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.692502 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050787f9-1101-4195-9b1b-1f1b5fa090cd-operator-scripts\") pod \"050787f9-1101-4195-9b1b-1f1b5fa090cd\" (UID: \"050787f9-1101-4195-9b1b-1f1b5fa090cd\") " Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.692544 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbzfw\" (UniqueName: \"kubernetes.io/projected/050787f9-1101-4195-9b1b-1f1b5fa090cd-kube-api-access-pbzfw\") pod \"050787f9-1101-4195-9b1b-1f1b5fa090cd\" (UID: \"050787f9-1101-4195-9b1b-1f1b5fa090cd\") " Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.699812 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050787f9-1101-4195-9b1b-1f1b5fa090cd-kube-api-access-pbzfw" (OuterVolumeSpecName: "kube-api-access-pbzfw") pod "050787f9-1101-4195-9b1b-1f1b5fa090cd" (UID: "050787f9-1101-4195-9b1b-1f1b5fa090cd"). InnerVolumeSpecName "kube-api-access-pbzfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.700192 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050787f9-1101-4195-9b1b-1f1b5fa090cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "050787f9-1101-4195-9b1b-1f1b5fa090cd" (UID: "050787f9-1101-4195-9b1b-1f1b5fa090cd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.787489 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gr5rf-config-6wzzn"] Feb 02 11:17:12 crc kubenswrapper[4925]: E0202 11:17:12.787826 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7159077-105a-48ca-96f6-12ffb19c7a93" containerName="mariadb-database-create" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.787841 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7159077-105a-48ca-96f6-12ffb19c7a93" containerName="mariadb-database-create" Feb 02 11:17:12 crc kubenswrapper[4925]: E0202 11:17:12.787852 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050787f9-1101-4195-9b1b-1f1b5fa090cd" containerName="mariadb-database-create" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.787858 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="050787f9-1101-4195-9b1b-1f1b5fa090cd" containerName="mariadb-database-create" Feb 02 11:17:12 crc kubenswrapper[4925]: E0202 11:17:12.787883 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05798ea-e028-47df-9773-41ee87d76c15" containerName="ovn-config" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.787889 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05798ea-e028-47df-9773-41ee87d76c15" containerName="ovn-config" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.788030 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05798ea-e028-47df-9773-41ee87d76c15" containerName="ovn-config" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.788051 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7159077-105a-48ca-96f6-12ffb19c7a93" containerName="mariadb-database-create" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.788064 4925 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="050787f9-1101-4195-9b1b-1f1b5fa090cd" containerName="mariadb-database-create" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.788610 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.791290 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.794263 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7159077-105a-48ca-96f6-12ffb19c7a93-operator-scripts\") pod \"d7159077-105a-48ca-96f6-12ffb19c7a93\" (UID: \"d7159077-105a-48ca-96f6-12ffb19c7a93\") " Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.794440 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qdpz\" (UniqueName: \"kubernetes.io/projected/d7159077-105a-48ca-96f6-12ffb19c7a93-kube-api-access-8qdpz\") pod \"d7159077-105a-48ca-96f6-12ffb19c7a93\" (UID: \"d7159077-105a-48ca-96f6-12ffb19c7a93\") " Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.794652 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7159077-105a-48ca-96f6-12ffb19c7a93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7159077-105a-48ca-96f6-12ffb19c7a93" (UID: "d7159077-105a-48ca-96f6-12ffb19c7a93"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.795489 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050787f9-1101-4195-9b1b-1f1b5fa090cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.795518 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbzfw\" (UniqueName: \"kubernetes.io/projected/050787f9-1101-4195-9b1b-1f1b5fa090cd-kube-api-access-pbzfw\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.795532 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7159077-105a-48ca-96f6-12ffb19c7a93-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.799940 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7159077-105a-48ca-96f6-12ffb19c7a93-kube-api-access-8qdpz" (OuterVolumeSpecName: "kube-api-access-8qdpz") pod "d7159077-105a-48ca-96f6-12ffb19c7a93" (UID: "d7159077-105a-48ca-96f6-12ffb19c7a93"). InnerVolumeSpecName "kube-api-access-8qdpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.809622 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gr5rf-config-6wzzn"] Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.896684 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-scripts\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.896752 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-additional-scripts\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.896778 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.896889 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lwhs\" (UniqueName: \"kubernetes.io/projected/4263f89a-7369-4226-993b-7ce8f083123a-kube-api-access-5lwhs\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.897062 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run-ovn\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.897117 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-log-ovn\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.897244 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qdpz\" (UniqueName: \"kubernetes.io/projected/d7159077-105a-48ca-96f6-12ffb19c7a93-kube-api-access-8qdpz\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.999138 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run-ovn\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.999190 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-log-ovn\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.999234 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-scripts\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.999265 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-additional-scripts\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.999283 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.999322 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lwhs\" (UniqueName: \"kubernetes.io/projected/4263f89a-7369-4226-993b-7ce8f083123a-kube-api-access-5lwhs\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.999558 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run-ovn\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.999711 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-log-ovn\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:12 crc kubenswrapper[4925]: I0202 11:17:12.999711 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.000448 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-additional-scripts\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.001972 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-scripts\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.018410 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lwhs\" (UniqueName: \"kubernetes.io/projected/4263f89a-7369-4226-993b-7ce8f083123a-kube-api-access-5lwhs\") pod \"ovn-controller-gr5rf-config-6wzzn\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.107008 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.288835 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tz6vw" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.289333 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tz6vw" event={"ID":"050787f9-1101-4195-9b1b-1f1b5fa090cd","Type":"ContainerDied","Data":"3e3b716a373f96478130c474cb347f78b8ff00a9df2c62fe2ef14b798edc03bc"} Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.289429 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e3b716a373f96478130c474cb347f78b8ff00a9df2c62fe2ef14b798edc03bc" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.295633 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qpjpr" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.297236 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qpjpr" event={"ID":"d7159077-105a-48ca-96f6-12ffb19c7a93","Type":"ContainerDied","Data":"a8c447f47dc87fc9d685e9be308ca6067a46277146b6a9fbe260bbe8ae79a837"} Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.297289 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c447f47dc87fc9d685e9be308ca6067a46277146b6a9fbe260bbe8ae79a837" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.398799 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.398853 4925 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.398902 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.399579 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc20c1950a2aee33db5a561db4bbc78e34cfd4881473af054b6cd76fb628d232"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.399637 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://dc20c1950a2aee33db5a561db4bbc78e34cfd4881473af054b6cd76fb628d232" gracePeriod=600 Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.585506 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gr5rf-config-6wzzn"] Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.692151 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.817554 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-operator-scripts\") pod \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\" (UID: \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\") " Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.817752 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwfzl\" (UniqueName: \"kubernetes.io/projected/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-kube-api-access-xwfzl\") pod \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\" (UID: \"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf\") " Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.821615 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf" (UID: "07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.827163 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-kube-api-access-xwfzl" (OuterVolumeSpecName: "kube-api-access-xwfzl") pod "07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf" (UID: "07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf"). InnerVolumeSpecName "kube-api-access-xwfzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.919837 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwfzl\" (UniqueName: \"kubernetes.io/projected/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-kube-api-access-xwfzl\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:13 crc kubenswrapper[4925]: I0202 11:17:13.919879 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.304110 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="dc20c1950a2aee33db5a561db4bbc78e34cfd4881473af054b6cd76fb628d232" exitCode=0 Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.304127 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"dc20c1950a2aee33db5a561db4bbc78e34cfd4881473af054b6cd76fb628d232"} Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.304659 4925 scope.go:117] "RemoveContainer" containerID="4b03a1975ff91abe6f92e545f0ab1b94a8a292e0264c3f7e53cacd130fa2f25b" Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.306361 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gr5rf-config-6wzzn" event={"ID":"4263f89a-7369-4226-993b-7ce8f083123a","Type":"ContainerStarted","Data":"d602cb425a29481929a73de6f2f934a7c5603c9fcbe7e82aa6fd14f447cac08e"} Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.306394 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gr5rf-config-6wzzn" 
event={"ID":"4263f89a-7369-4226-993b-7ce8f083123a","Type":"ContainerStarted","Data":"60f22e088af570e42ad0bc31535f72c980d2898fe35cace864d8d7b41afecb2b"} Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.308645 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cvgcr" event={"ID":"07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf","Type":"ContainerDied","Data":"d21fd62010b566bd46735386ea7aa31c4d22d6ac70ad56c715d2e03a7ecebd3c"} Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.308774 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cvgcr" Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.309169 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21fd62010b566bd46735386ea7aa31c4d22d6ac70ad56c715d2e03a7ecebd3c" Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.331157 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gr5rf-config-6wzzn" podStartSLOduration=2.331139855 podStartE2EDuration="2.331139855s" podCreationTimestamp="2026-02-02 11:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:14.321751535 +0000 UTC m=+1211.326000507" watchObservedRunningTime="2026-02-02 11:17:14.331139855 +0000 UTC m=+1211.335388817" Feb 02 11:17:14 crc kubenswrapper[4925]: I0202 11:17:14.676688 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a05798ea-e028-47df-9773-41ee87d76c15" path="/var/lib/kubelet/pods/a05798ea-e028-47df-9773-41ee87d76c15/volumes" Feb 02 11:17:15 crc kubenswrapper[4925]: I0202 11:17:15.316972 4925 generic.go:334] "Generic (PLEG): container finished" podID="4263f89a-7369-4226-993b-7ce8f083123a" containerID="d602cb425a29481929a73de6f2f934a7c5603c9fcbe7e82aa6fd14f447cac08e" exitCode=0 Feb 02 11:17:15 crc kubenswrapper[4925]: I0202 
11:17:15.317043 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gr5rf-config-6wzzn" event={"ID":"4263f89a-7369-4226-993b-7ce8f083123a","Type":"ContainerDied","Data":"d602cb425a29481929a73de6f2f934a7c5603c9fcbe7e82aa6fd14f447cac08e"} Feb 02 11:17:15 crc kubenswrapper[4925]: I0202 11:17:15.318771 4925 generic.go:334] "Generic (PLEG): container finished" podID="db5a7454-06cf-426b-93f7-8d0c2b0a27d2" containerID="ef83eeb42f23f8ad93df1ed6b1afc34c5562730eba18eaed7322434108d17424" exitCode=0 Feb 02 11:17:15 crc kubenswrapper[4925]: I0202 11:17:15.318844 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a0ad-account-create-update-5n6nq" event={"ID":"db5a7454-06cf-426b-93f7-8d0c2b0a27d2","Type":"ContainerDied","Data":"ef83eeb42f23f8ad93df1ed6b1afc34c5562730eba18eaed7322434108d17424"} Feb 02 11:17:15 crc kubenswrapper[4925]: I0202 11:17:15.322445 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"66621d3a93bf4a19f7e5b6564542e797798b65c2f056111ba9523d20399b11ef"} Feb 02 11:17:16 crc kubenswrapper[4925]: I0202 11:17:16.330178 4925 generic.go:334] "Generic (PLEG): container finished" podID="698a5a7f-70fd-4a7f-abbb-6c648b30bd27" containerID="81da7f4ccdacbf4428d202fbada41b16d19f6f1e62ef3cc72a5c7901f7c4b063" exitCode=0 Feb 02 11:17:16 crc kubenswrapper[4925]: I0202 11:17:16.330336 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dhrzg" event={"ID":"698a5a7f-70fd-4a7f-abbb-6c648b30bd27","Type":"ContainerDied","Data":"81da7f4ccdacbf4428d202fbada41b16d19f6f1e62ef3cc72a5c7901f7c4b063"} Feb 02 11:17:16 crc kubenswrapper[4925]: I0202 11:17:16.333801 4925 generic.go:334] "Generic (PLEG): container finished" podID="e5081cd7-3a75-408b-a30d-eb1156689a3d" 
containerID="5eda4a561543db11e81df459f7c307989e1184aa91e335a2b63c30a73bb3bb61" exitCode=0 Feb 02 11:17:16 crc kubenswrapper[4925]: I0202 11:17:16.333897 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-efb2-account-create-update-tnvpf" event={"ID":"e5081cd7-3a75-408b-a30d-eb1156689a3d","Type":"ContainerDied","Data":"5eda4a561543db11e81df459f7c307989e1184aa91e335a2b63c30a73bb3bb61"} Feb 02 11:17:16 crc kubenswrapper[4925]: I0202 11:17:16.335836 4925 generic.go:334] "Generic (PLEG): container finished" podID="7070fb66-e433-4f35-b5a7-b4666e8c0638" containerID="d48df7b05a145a6ec40fca6d9424318169933e1edae8259e9729c18784d17f67" exitCode=0 Feb 02 11:17:16 crc kubenswrapper[4925]: I0202 11:17:16.335960 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b182-account-create-update-sxbt4" event={"ID":"7070fb66-e433-4f35-b5a7-b4666e8c0638","Type":"ContainerDied","Data":"d48df7b05a145a6ec40fca6d9424318169933e1edae8259e9729c18784d17f67"} Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.318773 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.329519 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.355213 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gr5rf-config-6wzzn" event={"ID":"4263f89a-7369-4226-993b-7ce8f083123a","Type":"ContainerDied","Data":"60f22e088af570e42ad0bc31535f72c980d2898fe35cace864d8d7b41afecb2b"} Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.355265 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f22e088af570e42ad0bc31535f72c980d2898fe35cace864d8d7b41afecb2b" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.355336 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gr5rf-config-6wzzn" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.377549 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lwhs\" (UniqueName: \"kubernetes.io/projected/4263f89a-7369-4226-993b-7ce8f083123a-kube-api-access-5lwhs\") pod \"4263f89a-7369-4226-993b-7ce8f083123a\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.377890 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run-ovn\") pod \"4263f89a-7369-4226-993b-7ce8f083123a\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.378019 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-scripts\") pod \"4263f89a-7369-4226-993b-7ce8f083123a\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.378129 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-log-ovn\") pod \"4263f89a-7369-4226-993b-7ce8f083123a\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.378240 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-operator-scripts\") pod \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\" (UID: \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\") " Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.378375 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run\") pod \"4263f89a-7369-4226-993b-7ce8f083123a\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.378504 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km828\" (UniqueName: \"kubernetes.io/projected/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-kube-api-access-km828\") pod \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\" (UID: \"db5a7454-06cf-426b-93f7-8d0c2b0a27d2\") " Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.378597 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-additional-scripts\") pod \"4263f89a-7369-4226-993b-7ce8f083123a\" (UID: \"4263f89a-7369-4226-993b-7ce8f083123a\") " Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.379399 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4263f89a-7369-4226-993b-7ce8f083123a" (UID: "4263f89a-7369-4226-993b-7ce8f083123a"). 
InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.380379 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4263f89a-7369-4226-993b-7ce8f083123a" (UID: "4263f89a-7369-4226-993b-7ce8f083123a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.381011 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db5a7454-06cf-426b-93f7-8d0c2b0a27d2" (UID: "db5a7454-06cf-426b-93f7-8d0c2b0a27d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.381140 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run" (OuterVolumeSpecName: "var-run") pod "4263f89a-7369-4226-993b-7ce8f083123a" (UID: "4263f89a-7369-4226-993b-7ce8f083123a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.381352 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4263f89a-7369-4226-993b-7ce8f083123a" (UID: "4263f89a-7369-4226-993b-7ce8f083123a"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.383950 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-scripts" (OuterVolumeSpecName: "scripts") pod "4263f89a-7369-4226-993b-7ce8f083123a" (UID: "4263f89a-7369-4226-993b-7ce8f083123a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.387849 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-kube-api-access-km828" (OuterVolumeSpecName: "kube-api-access-km828") pod "db5a7454-06cf-426b-93f7-8d0c2b0a27d2" (UID: "db5a7454-06cf-426b-93f7-8d0c2b0a27d2"). InnerVolumeSpecName "kube-api-access-km828". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.388006 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a0ad-account-create-update-5n6nq" event={"ID":"db5a7454-06cf-426b-93f7-8d0c2b0a27d2","Type":"ContainerDied","Data":"dad04db3ee442aa4dc2244e2e6560f33c536ca1324b90626d5940cec99a72291"} Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.388243 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dad04db3ee442aa4dc2244e2e6560f33c536ca1324b90626d5940cec99a72291" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.388178 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a0ad-account-create-update-5n6nq" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.390404 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4263f89a-7369-4226-993b-7ce8f083123a-kube-api-access-5lwhs" (OuterVolumeSpecName: "kube-api-access-5lwhs") pod "4263f89a-7369-4226-993b-7ce8f083123a" (UID: "4263f89a-7369-4226-993b-7ce8f083123a"). InnerVolumeSpecName "kube-api-access-5lwhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.425898 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gr5rf-config-6wzzn"] Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.441038 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gr5rf-config-6wzzn"] Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.480458 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lwhs\" (UniqueName: \"kubernetes.io/projected/4263f89a-7369-4226-993b-7ce8f083123a-kube-api-access-5lwhs\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.480509 4925 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.480521 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.480529 4925 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 
11:17:17.480537 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.480546 4925 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4263f89a-7369-4226-993b-7ce8f083123a-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.480554 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km828\" (UniqueName: \"kubernetes.io/projected/db5a7454-06cf-426b-93f7-8d0c2b0a27d2-kube-api-access-km828\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:17 crc kubenswrapper[4925]: I0202 11:17:17.480561 4925 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4263f89a-7369-4226-993b-7ce8f083123a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.493133 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.503583 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.596476 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5t9z\" (UniqueName: \"kubernetes.io/projected/7070fb66-e433-4f35-b5a7-b4666e8c0638-kube-api-access-v5t9z\") pod \"7070fb66-e433-4f35-b5a7-b4666e8c0638\" (UID: \"7070fb66-e433-4f35-b5a7-b4666e8c0638\") " Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.596616 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7070fb66-e433-4f35-b5a7-b4666e8c0638-operator-scripts\") pod \"7070fb66-e433-4f35-b5a7-b4666e8c0638\" (UID: \"7070fb66-e433-4f35-b5a7-b4666e8c0638\") " Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.596644 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-operator-scripts\") pod \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\" (UID: \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\") " Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.598175 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "698a5a7f-70fd-4a7f-abbb-6c648b30bd27" (UID: "698a5a7f-70fd-4a7f-abbb-6c648b30bd27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.598480 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7070fb66-e433-4f35-b5a7-b4666e8c0638-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7070fb66-e433-4f35-b5a7-b4666e8c0638" (UID: "7070fb66-e433-4f35-b5a7-b4666e8c0638"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.603908 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7070fb66-e433-4f35-b5a7-b4666e8c0638-kube-api-access-v5t9z" (OuterVolumeSpecName: "kube-api-access-v5t9z") pod "7070fb66-e433-4f35-b5a7-b4666e8c0638" (UID: "7070fb66-e433-4f35-b5a7-b4666e8c0638"). InnerVolumeSpecName "kube-api-access-v5t9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.675226 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4263f89a-7369-4226-993b-7ce8f083123a" path="/var/lib/kubelet/pods/4263f89a-7369-4226-993b-7ce8f083123a/volumes" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.697991 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxr8p\" (UniqueName: \"kubernetes.io/projected/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-kube-api-access-zxr8p\") pod \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\" (UID: \"698a5a7f-70fd-4a7f-abbb-6c648b30bd27\") " Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.699376 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5t9z\" (UniqueName: \"kubernetes.io/projected/7070fb66-e433-4f35-b5a7-b4666e8c0638-kube-api-access-v5t9z\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.699420 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7070fb66-e433-4f35-b5a7-b4666e8c0638-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.699435 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:18 crc 
kubenswrapper[4925]: I0202 11:17:18.700769 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-kube-api-access-zxr8p" (OuterVolumeSpecName: "kube-api-access-zxr8p") pod "698a5a7f-70fd-4a7f-abbb-6c648b30bd27" (UID: "698a5a7f-70fd-4a7f-abbb-6c648b30bd27"). InnerVolumeSpecName "kube-api-access-zxr8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.800155 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxr8p\" (UniqueName: \"kubernetes.io/projected/698a5a7f-70fd-4a7f-abbb-6c648b30bd27-kube-api-access-zxr8p\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:18 crc kubenswrapper[4925]: I0202 11:17:18.957922 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.103995 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5081cd7-3a75-408b-a30d-eb1156689a3d-operator-scripts\") pod \"e5081cd7-3a75-408b-a30d-eb1156689a3d\" (UID: \"e5081cd7-3a75-408b-a30d-eb1156689a3d\") " Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.104093 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdxng\" (UniqueName: \"kubernetes.io/projected/e5081cd7-3a75-408b-a30d-eb1156689a3d-kube-api-access-qdxng\") pod \"e5081cd7-3a75-408b-a30d-eb1156689a3d\" (UID: \"e5081cd7-3a75-408b-a30d-eb1156689a3d\") " Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.104723 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5081cd7-3a75-408b-a30d-eb1156689a3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5081cd7-3a75-408b-a30d-eb1156689a3d" (UID: 
"e5081cd7-3a75-408b-a30d-eb1156689a3d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.107354 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5081cd7-3a75-408b-a30d-eb1156689a3d-kube-api-access-qdxng" (OuterVolumeSpecName: "kube-api-access-qdxng") pod "e5081cd7-3a75-408b-a30d-eb1156689a3d" (UID: "e5081cd7-3a75-408b-a30d-eb1156689a3d"). InnerVolumeSpecName "kube-api-access-qdxng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.206807 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5081cd7-3a75-408b-a30d-eb1156689a3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.206846 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdxng\" (UniqueName: \"kubernetes.io/projected/e5081cd7-3a75-408b-a30d-eb1156689a3d-kube-api-access-qdxng\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.405654 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b182-account-create-update-sxbt4" event={"ID":"7070fb66-e433-4f35-b5a7-b4666e8c0638","Type":"ContainerDied","Data":"a78b3ec61d0f5aaa4d47663359d53227bbcd735d3685f03530f62165e6551be9"} Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.405694 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a78b3ec61d0f5aaa4d47663359d53227bbcd735d3685f03530f62165e6551be9" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.405703 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b182-account-create-update-sxbt4" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.407680 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dhrzg" event={"ID":"698a5a7f-70fd-4a7f-abbb-6c648b30bd27","Type":"ContainerDied","Data":"e6be7289a6eb15dd42df3bb2a0c7be0bd84edbcdec051c7d1ee2e457abffc736"} Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.407706 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dhrzg" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.407713 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6be7289a6eb15dd42df3bb2a0c7be0bd84edbcdec051c7d1ee2e457abffc736" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.410541 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-efb2-account-create-update-tnvpf" event={"ID":"e5081cd7-3a75-408b-a30d-eb1156689a3d","Type":"ContainerDied","Data":"dd97c774647d45748dbde019e5347fe3bda92b984172a27dff757c89b1e27bb0"} Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.410569 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd97c774647d45748dbde019e5347fe3bda92b984172a27dff757c89b1e27bb0" Feb 02 11:17:19 crc kubenswrapper[4925]: I0202 11:17:19.410585 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-efb2-account-create-update-tnvpf" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.316569 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xtzng"] Feb 02 11:17:20 crc kubenswrapper[4925]: E0202 11:17:20.317187 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf" containerName="mariadb-database-create" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317204 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf" containerName="mariadb-database-create" Feb 02 11:17:20 crc kubenswrapper[4925]: E0202 11:17:20.317219 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698a5a7f-70fd-4a7f-abbb-6c648b30bd27" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317225 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="698a5a7f-70fd-4a7f-abbb-6c648b30bd27" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: E0202 11:17:20.317240 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7070fb66-e433-4f35-b5a7-b4666e8c0638" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317246 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7070fb66-e433-4f35-b5a7-b4666e8c0638" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: E0202 11:17:20.317258 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4263f89a-7369-4226-993b-7ce8f083123a" containerName="ovn-config" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317264 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4263f89a-7369-4226-993b-7ce8f083123a" containerName="ovn-config" Feb 02 11:17:20 crc kubenswrapper[4925]: E0202 11:17:20.317275 4925 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e5081cd7-3a75-408b-a30d-eb1156689a3d" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317281 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5081cd7-3a75-408b-a30d-eb1156689a3d" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: E0202 11:17:20.317296 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5a7454-06cf-426b-93f7-8d0c2b0a27d2" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317302 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5a7454-06cf-426b-93f7-8d0c2b0a27d2" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317487 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7070fb66-e433-4f35-b5a7-b4666e8c0638" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317509 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf" containerName="mariadb-database-create" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317518 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5081cd7-3a75-408b-a30d-eb1156689a3d" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317526 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5a7454-06cf-426b-93f7-8d0c2b0a27d2" containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317532 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4263f89a-7369-4226-993b-7ce8f083123a" containerName="ovn-config" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.317542 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="698a5a7f-70fd-4a7f-abbb-6c648b30bd27" 
containerName="mariadb-account-create-update" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.318054 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.319732 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.319836 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s2xj8" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.327879 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xtzng"] Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.426041 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6648\" (UniqueName: \"kubernetes.io/projected/66bbba42-9e45-446e-8042-a428a6269d08-kube-api-access-n6648\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.426129 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-db-sync-config-data\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.426255 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-config-data\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.426366 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-combined-ca-bundle\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.528326 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-combined-ca-bundle\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.528647 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6648\" (UniqueName: \"kubernetes.io/projected/66bbba42-9e45-446e-8042-a428a6269d08-kube-api-access-n6648\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.528690 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-db-sync-config-data\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.528719 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-config-data\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.532956 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-db-sync-config-data\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.533467 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-combined-ca-bundle\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.534027 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-config-data\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.549727 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6648\" (UniqueName: \"kubernetes.io/projected/66bbba42-9e45-446e-8042-a428a6269d08-kube-api-access-n6648\") pod \"glance-db-sync-xtzng\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") " pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:20 crc kubenswrapper[4925]: I0202 11:17:20.646225 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xtzng" Feb 02 11:17:21 crc kubenswrapper[4925]: I0202 11:17:21.165053 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xtzng"] Feb 02 11:17:21 crc kubenswrapper[4925]: I0202 11:17:21.424748 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xtzng" event={"ID":"66bbba42-9e45-446e-8042-a428a6269d08","Type":"ContainerStarted","Data":"f7080af4979bbe0fdfa4609f05f8e06ee91a3d8f6dad83bc3968b34b2df63002"} Feb 02 11:17:23 crc kubenswrapper[4925]: I0202 11:17:23.094964 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dhrzg"] Feb 02 11:17:23 crc kubenswrapper[4925]: I0202 11:17:23.111192 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dhrzg"] Feb 02 11:17:24 crc kubenswrapper[4925]: I0202 11:17:24.676302 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="698a5a7f-70fd-4a7f-abbb-6c648b30bd27" path="/var/lib/kubelet/pods/698a5a7f-70fd-4a7f-abbb-6c648b30bd27/volumes" Feb 02 11:17:26 crc kubenswrapper[4925]: I0202 11:17:26.444408 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 11:17:26 crc kubenswrapper[4925]: I0202 11:17:26.803177 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:17:26 crc kubenswrapper[4925]: I0202 11:17:26.881774 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rx9rv"] Feb 02 11:17:26 crc kubenswrapper[4925]: I0202 11:17:26.882909 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:26 crc kubenswrapper[4925]: I0202 11:17:26.891147 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rx9rv"] Feb 02 11:17:26 crc kubenswrapper[4925]: I0202 11:17:26.929205 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b379-account-create-update-kqh5k"] Feb 02 11:17:26 crc kubenswrapper[4925]: I0202 11:17:26.934513 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 11:17:26 crc kubenswrapper[4925]: I0202 11:17:26.943111 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 11:17:26 crc kubenswrapper[4925]: I0202 11:17:26.981288 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b379-account-create-update-kqh5k"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.066939 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ghjw\" (UniqueName: \"kubernetes.io/projected/c0cf93f3-efad-4435-820a-f9b4631c1efd-kube-api-access-8ghjw\") pod \"barbican-b379-account-create-update-kqh5k\" (UID: \"c0cf93f3-efad-4435-820a-f9b4631c1efd\") " pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.067013 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-operator-scripts\") pod \"barbican-db-create-rx9rv\" (UID: \"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\") " pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.067051 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c0cf93f3-efad-4435-820a-f9b4631c1efd-operator-scripts\") pod \"barbican-b379-account-create-update-kqh5k\" (UID: \"c0cf93f3-efad-4435-820a-f9b4631c1efd\") " pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.067132 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6n62\" (UniqueName: \"kubernetes.io/projected/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-kube-api-access-s6n62\") pod \"barbican-db-create-rx9rv\" (UID: \"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\") " pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.087169 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rdb26"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.088431 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.119367 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rdb26"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.168338 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qb2j\" (UniqueName: \"kubernetes.io/projected/53fefe30-82c4-4d41-9af2-c23e671ce91e-kube-api-access-9qb2j\") pod \"cinder-db-create-rdb26\" (UID: \"53fefe30-82c4-4d41-9af2-c23e671ce91e\") " pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.168641 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ghjw\" (UniqueName: \"kubernetes.io/projected/c0cf93f3-efad-4435-820a-f9b4631c1efd-kube-api-access-8ghjw\") pod \"barbican-b379-account-create-update-kqh5k\" (UID: \"c0cf93f3-efad-4435-820a-f9b4631c1efd\") " pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 
11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.168678 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-operator-scripts\") pod \"barbican-db-create-rx9rv\" (UID: \"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\") " pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.168707 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cf93f3-efad-4435-820a-f9b4631c1efd-operator-scripts\") pod \"barbican-b379-account-create-update-kqh5k\" (UID: \"c0cf93f3-efad-4435-820a-f9b4631c1efd\") " pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.168755 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53fefe30-82c4-4d41-9af2-c23e671ce91e-operator-scripts\") pod \"cinder-db-create-rdb26\" (UID: \"53fefe30-82c4-4d41-9af2-c23e671ce91e\") " pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.168773 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6n62\" (UniqueName: \"kubernetes.io/projected/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-kube-api-access-s6n62\") pod \"barbican-db-create-rx9rv\" (UID: \"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\") " pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.169959 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-operator-scripts\") pod \"barbican-db-create-rx9rv\" (UID: \"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\") " pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:27 crc 
kubenswrapper[4925]: I0202 11:17:27.170192 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cf93f3-efad-4435-820a-f9b4631c1efd-operator-scripts\") pod \"barbican-b379-account-create-update-kqh5k\" (UID: \"c0cf93f3-efad-4435-820a-f9b4631c1efd\") " pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.198310 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ghjw\" (UniqueName: \"kubernetes.io/projected/c0cf93f3-efad-4435-820a-f9b4631c1efd-kube-api-access-8ghjw\") pod \"barbican-b379-account-create-update-kqh5k\" (UID: \"c0cf93f3-efad-4435-820a-f9b4631c1efd\") " pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.201275 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c6e8-account-create-update-s5hwt"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.203835 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.207138 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.210660 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6n62\" (UniqueName: \"kubernetes.io/projected/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-kube-api-access-s6n62\") pod \"barbican-db-create-rx9rv\" (UID: \"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\") " pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.221945 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c6e8-account-create-update-s5hwt"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.232106 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.272676 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.273643 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53fefe30-82c4-4d41-9af2-c23e671ce91e-operator-scripts\") pod \"cinder-db-create-rdb26\" (UID: \"53fefe30-82c4-4d41-9af2-c23e671ce91e\") " pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.273908 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8z8\" (UniqueName: \"kubernetes.io/projected/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-kube-api-access-lf8z8\") pod \"cinder-c6e8-account-create-update-s5hwt\" (UID: \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\") " pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.274029 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-operator-scripts\") pod \"cinder-c6e8-account-create-update-s5hwt\" (UID: \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\") " pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.274217 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qb2j\" (UniqueName: \"kubernetes.io/projected/53fefe30-82c4-4d41-9af2-c23e671ce91e-kube-api-access-9qb2j\") pod \"cinder-db-create-rdb26\" (UID: \"53fefe30-82c4-4d41-9af2-c23e671ce91e\") " pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.274531 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53fefe30-82c4-4d41-9af2-c23e671ce91e-operator-scripts\") pod 
\"cinder-db-create-rdb26\" (UID: \"53fefe30-82c4-4d41-9af2-c23e671ce91e\") " pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.291226 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rvhtj"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.292622 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.298227 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.298489 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.298522 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.298567 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wkg8f" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.317129 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rvhtj"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.352325 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rclxc"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.353667 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.379535 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-config-data\") pod \"keystone-db-sync-rvhtj\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.379661 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-combined-ca-bundle\") pod \"keystone-db-sync-rvhtj\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.379734 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf8z8\" (UniqueName: \"kubernetes.io/projected/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-kube-api-access-lf8z8\") pod \"cinder-c6e8-account-create-update-s5hwt\" (UID: \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\") " pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.379759 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-operator-scripts\") pod \"cinder-c6e8-account-create-update-s5hwt\" (UID: \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\") " pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.379804 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q5tt\" (UniqueName: \"kubernetes.io/projected/de3b7878-e41d-4ea8-b866-f34a48455d29-kube-api-access-9q5tt\") pod 
\"keystone-db-sync-rvhtj\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.380054 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rclxc"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.381224 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-operator-scripts\") pod \"cinder-c6e8-account-create-update-s5hwt\" (UID: \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\") " pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.393933 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qb2j\" (UniqueName: \"kubernetes.io/projected/53fefe30-82c4-4d41-9af2-c23e671ce91e-kube-api-access-9qb2j\") pod \"cinder-db-create-rdb26\" (UID: \"53fefe30-82c4-4d41-9af2-c23e671ce91e\") " pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.417730 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8z8\" (UniqueName: \"kubernetes.io/projected/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-kube-api-access-lf8z8\") pod \"cinder-c6e8-account-create-update-s5hwt\" (UID: \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\") " pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.434004 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.469033 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-da80-account-create-update-2hpp2"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.470247 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.473456 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.480886 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q5tt\" (UniqueName: \"kubernetes.io/projected/de3b7878-e41d-4ea8-b866-f34a48455d29-kube-api-access-9q5tt\") pod \"keystone-db-sync-rvhtj\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.481036 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bcr\" (UniqueName: \"kubernetes.io/projected/fab8fa26-070e-4295-bbf4-b44a994f2ba0-kube-api-access-b6bcr\") pod \"neutron-db-create-rclxc\" (UID: \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\") " pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.481097 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-config-data\") pod \"keystone-db-sync-rvhtj\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.481168 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fab8fa26-070e-4295-bbf4-b44a994f2ba0-operator-scripts\") pod \"neutron-db-create-rclxc\" (UID: \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\") " pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.481211 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-combined-ca-bundle\") pod \"keystone-db-sync-rvhtj\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.485936 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-config-data\") pod \"keystone-db-sync-rvhtj\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.487111 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-combined-ca-bundle\") pod \"keystone-db-sync-rvhtj\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.490277 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-da80-account-create-update-2hpp2"] Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.513441 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q5tt\" (UniqueName: \"kubernetes.io/projected/de3b7878-e41d-4ea8-b866-f34a48455d29-kube-api-access-9q5tt\") pod \"keystone-db-sync-rvhtj\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.583537 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6bcr\" (UniqueName: \"kubernetes.io/projected/fab8fa26-070e-4295-bbf4-b44a994f2ba0-kube-api-access-b6bcr\") pod \"neutron-db-create-rclxc\" (UID: \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\") " pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.583594 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03ccf8f-b9cc-4352-8c6b-7a5705807701-operator-scripts\") pod \"neutron-da80-account-create-update-2hpp2\" (UID: \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\") " pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.583668 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fab8fa26-070e-4295-bbf4-b44a994f2ba0-operator-scripts\") pod \"neutron-db-create-rclxc\" (UID: \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\") " pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.583718 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rjmp\" (UniqueName: \"kubernetes.io/projected/f03ccf8f-b9cc-4352-8c6b-7a5705807701-kube-api-access-6rjmp\") pod \"neutron-da80-account-create-update-2hpp2\" (UID: \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\") " pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.584662 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fab8fa26-070e-4295-bbf4-b44a994f2ba0-operator-scripts\") pod \"neutron-db-create-rclxc\" (UID: \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\") " pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.602646 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6bcr\" (UniqueName: \"kubernetes.io/projected/fab8fa26-070e-4295-bbf4-b44a994f2ba0-kube-api-access-b6bcr\") pod \"neutron-db-create-rclxc\" (UID: \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\") " pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.609720 4925 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.685447 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03ccf8f-b9cc-4352-8c6b-7a5705807701-operator-scripts\") pod \"neutron-da80-account-create-update-2hpp2\" (UID: \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\") " pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.685596 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rjmp\" (UniqueName: \"kubernetes.io/projected/f03ccf8f-b9cc-4352-8c6b-7a5705807701-kube-api-access-6rjmp\") pod \"neutron-da80-account-create-update-2hpp2\" (UID: \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\") " pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.686293 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03ccf8f-b9cc-4352-8c6b-7a5705807701-operator-scripts\") pod \"neutron-da80-account-create-update-2hpp2\" (UID: \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\") " pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.695452 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.713216 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rjmp\" (UniqueName: \"kubernetes.io/projected/f03ccf8f-b9cc-4352-8c6b-7a5705807701-kube-api-access-6rjmp\") pod \"neutron-da80-account-create-update-2hpp2\" (UID: \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\") " pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.741899 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:27 crc kubenswrapper[4925]: I0202 11:17:27.822659 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.109680 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xmpnn"] Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.110700 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.112879 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.122490 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xmpnn"] Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.194747 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5826e7fc-d781-46da-a2dc-09baa99ab163-operator-scripts\") pod \"root-account-create-update-xmpnn\" (UID: \"5826e7fc-d781-46da-a2dc-09baa99ab163\") " pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.194868 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xpqh\" (UniqueName: \"kubernetes.io/projected/5826e7fc-d781-46da-a2dc-09baa99ab163-kube-api-access-2xpqh\") pod \"root-account-create-update-xmpnn\" (UID: \"5826e7fc-d781-46da-a2dc-09baa99ab163\") " pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.296604 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xpqh\" (UniqueName: \"kubernetes.io/projected/5826e7fc-d781-46da-a2dc-09baa99ab163-kube-api-access-2xpqh\") pod \"root-account-create-update-xmpnn\" (UID: \"5826e7fc-d781-46da-a2dc-09baa99ab163\") " pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.296926 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5826e7fc-d781-46da-a2dc-09baa99ab163-operator-scripts\") pod \"root-account-create-update-xmpnn\" (UID: 
\"5826e7fc-d781-46da-a2dc-09baa99ab163\") " pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.299141 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5826e7fc-d781-46da-a2dc-09baa99ab163-operator-scripts\") pod \"root-account-create-update-xmpnn\" (UID: \"5826e7fc-d781-46da-a2dc-09baa99ab163\") " pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.322748 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xpqh\" (UniqueName: \"kubernetes.io/projected/5826e7fc-d781-46da-a2dc-09baa99ab163-kube-api-access-2xpqh\") pod \"root-account-create-update-xmpnn\" (UID: \"5826e7fc-d781-46da-a2dc-09baa99ab163\") " pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:28 crc kubenswrapper[4925]: I0202 11:17:28.481712 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:42 crc kubenswrapper[4925]: I0202 11:17:42.922951 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rclxc"] Feb 02 11:17:43 crc kubenswrapper[4925]: W0202 11:17:43.031298 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0cf93f3_efad_4435_820a_f9b4631c1efd.slice/crio-65a5e831fab4cf6866ea595b8360ac9f901cfd4c4b59eca83b690261e9b393c8 WatchSource:0}: Error finding container 65a5e831fab4cf6866ea595b8360ac9f901cfd4c4b59eca83b690261e9b393c8: Status 404 returned error can't find the container with id 65a5e831fab4cf6866ea595b8360ac9f901cfd4c4b59eca83b690261e9b393c8 Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.048048 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rx9rv"] Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.059504 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b379-account-create-update-kqh5k"] Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.085685 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-da80-account-create-update-2hpp2"] Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.117272 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rvhtj"] Feb 02 11:17:43 crc kubenswrapper[4925]: W0202 11:17:43.123402 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cdfffad_8b9e_41d8_b2da_a72d965d36a0.slice/crio-61cfa41dc257847726d2cf9188783a1ab624579beb092e5b048d6918971a6f28 WatchSource:0}: Error finding container 61cfa41dc257847726d2cf9188783a1ab624579beb092e5b048d6918971a6f28: Status 404 returned error can't find the container with id 61cfa41dc257847726d2cf9188783a1ab624579beb092e5b048d6918971a6f28 Feb 02 
11:17:43 crc kubenswrapper[4925]: W0202 11:17:43.125881 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53fefe30_82c4_4d41_9af2_c23e671ce91e.slice/crio-e6dc0f66ba82a443cb04a96b29c1bcdf5998fae6bc3e9d165288fd6da99fef6f WatchSource:0}: Error finding container e6dc0f66ba82a443cb04a96b29c1bcdf5998fae6bc3e9d165288fd6da99fef6f: Status 404 returned error can't find the container with id e6dc0f66ba82a443cb04a96b29c1bcdf5998fae6bc3e9d165288fd6da99fef6f Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.130264 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c6e8-account-create-update-s5hwt"] Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.137602 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rdb26"] Feb 02 11:17:43 crc kubenswrapper[4925]: W0202 11:17:43.137952 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde3b7878_e41d_4ea8_b866_f34a48455d29.slice/crio-45c3faf62fa9d26f5ea3142c09d021953eb4a7e788e9d6cc2f4a9ec6790063ad WatchSource:0}: Error finding container 45c3faf62fa9d26f5ea3142c09d021953eb4a7e788e9d6cc2f4a9ec6790063ad: Status 404 returned error can't find the container with id 45c3faf62fa9d26f5ea3142c09d021953eb4a7e788e9d6cc2f4a9ec6790063ad Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.245957 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xmpnn"] Feb 02 11:17:43 crc kubenswrapper[4925]: W0202 11:17:43.247359 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5826e7fc_d781_46da_a2dc_09baa99ab163.slice/crio-304b3f725a4a768e30ce580b681f14b9931b681b9e0cc713a1d5e2ea2564fcda WatchSource:0}: Error finding container 304b3f725a4a768e30ce580b681f14b9931b681b9e0cc713a1d5e2ea2564fcda: Status 
404 returned error can't find the container with id 304b3f725a4a768e30ce580b681f14b9931b681b9e0cc713a1d5e2ea2564fcda Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.625042 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-da80-account-create-update-2hpp2" event={"ID":"f03ccf8f-b9cc-4352-8c6b-7a5705807701","Type":"ContainerStarted","Data":"19b4f760e9a8f2d8e753338e520fd65e88cdb7ff1fb47621d06e8ed7a8c2a17e"} Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.626277 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rx9rv" event={"ID":"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c","Type":"ContainerStarted","Data":"f156b0f1e2f498dbc2c2ba1773d65ca0ab88ddd2ff69f81486dcb0a2745175a5"} Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.627700 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xmpnn" event={"ID":"5826e7fc-d781-46da-a2dc-09baa99ab163","Type":"ContainerStarted","Data":"304b3f725a4a768e30ce580b681f14b9931b681b9e0cc713a1d5e2ea2564fcda"} Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.628864 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rvhtj" event={"ID":"de3b7878-e41d-4ea8-b866-f34a48455d29","Type":"ContainerStarted","Data":"45c3faf62fa9d26f5ea3142c09d021953eb4a7e788e9d6cc2f4a9ec6790063ad"} Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.629992 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b379-account-create-update-kqh5k" event={"ID":"c0cf93f3-efad-4435-820a-f9b4631c1efd","Type":"ContainerStarted","Data":"65a5e831fab4cf6866ea595b8360ac9f901cfd4c4b59eca83b690261e9b393c8"} Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.630944 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c6e8-account-create-update-s5hwt" 
event={"ID":"2cdfffad-8b9e-41d8-b2da-a72d965d36a0","Type":"ContainerStarted","Data":"61cfa41dc257847726d2cf9188783a1ab624579beb092e5b048d6918971a6f28"} Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.632254 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rclxc" event={"ID":"fab8fa26-070e-4295-bbf4-b44a994f2ba0","Type":"ContainerStarted","Data":"40c3bb36d0e8a5e72bafd41685ad46c3560463f3770b1fe47dd3fe03b210f492"} Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.632283 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rclxc" event={"ID":"fab8fa26-070e-4295-bbf4-b44a994f2ba0","Type":"ContainerStarted","Data":"8ad05949fc83bbb69d365673031dee300615a8b71a9292fd3bdd7a3289653e94"} Feb 02 11:17:43 crc kubenswrapper[4925]: I0202 11:17:43.633166 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rdb26" event={"ID":"53fefe30-82c4-4d41-9af2-c23e671ce91e","Type":"ContainerStarted","Data":"e6dc0f66ba82a443cb04a96b29c1bcdf5998fae6bc3e9d165288fd6da99fef6f"} Feb 02 11:17:43 crc kubenswrapper[4925]: E0202 11:17:43.983842 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 02 11:17:43 crc kubenswrapper[4925]: E0202 11:17:43.984000 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6648,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xtzng_openstack(66bbba42-9e45-446e-8042-a428a6269d08): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 02 11:17:43 crc kubenswrapper[4925]: E0202 11:17:43.986018 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-xtzng" podUID="66bbba42-9e45-446e-8042-a428a6269d08" Feb 02 11:17:44 crc kubenswrapper[4925]: I0202 11:17:44.642203 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rdb26" event={"ID":"53fefe30-82c4-4d41-9af2-c23e671ce91e","Type":"ContainerStarted","Data":"903ce1f065f896201bddf4ca3c2c292be4b924b8b44d523ce4c4182008d9503d"} Feb 02 11:17:44 crc kubenswrapper[4925]: I0202 11:17:44.643342 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-da80-account-create-update-2hpp2" event={"ID":"f03ccf8f-b9cc-4352-8c6b-7a5705807701","Type":"ContainerStarted","Data":"b0a56371d541f3801251455bc7b78eb139c75200719877c714f4ad75dc0ed4f5"} Feb 02 11:17:44 crc kubenswrapper[4925]: I0202 11:17:44.644669 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rx9rv" event={"ID":"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c","Type":"ContainerStarted","Data":"766960461a2916d9e4721732fa0d331e5609fb70f6a8265acc6b4e4afd702a34"} Feb 02 11:17:44 crc kubenswrapper[4925]: I0202 11:17:44.647333 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xmpnn" event={"ID":"5826e7fc-d781-46da-a2dc-09baa99ab163","Type":"ContainerStarted","Data":"be348d929c12884d0f657b05b4c4543e50183126b26a9cfd25fb754a10960370"} Feb 02 11:17:44 crc kubenswrapper[4925]: I0202 11:17:44.649332 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b379-account-create-update-kqh5k" 
event={"ID":"c0cf93f3-efad-4435-820a-f9b4631c1efd","Type":"ContainerStarted","Data":"451dd8f6eec86e08c08cfb9a9a4dbadb1ba68945cf87ce057b9d8137ee776837"} Feb 02 11:17:44 crc kubenswrapper[4925]: I0202 11:17:44.651498 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c6e8-account-create-update-s5hwt" event={"ID":"2cdfffad-8b9e-41d8-b2da-a72d965d36a0","Type":"ContainerStarted","Data":"b899db659ca08d16b61d5f09b3f8cc9e00e9e475976198674ce67baa0591334f"} Feb 02 11:17:44 crc kubenswrapper[4925]: E0202 11:17:44.652041 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-xtzng" podUID="66bbba42-9e45-446e-8042-a428a6269d08" Feb 02 11:17:44 crc kubenswrapper[4925]: I0202 11:17:44.662430 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-rx9rv" podStartSLOduration=18.662409593 podStartE2EDuration="18.662409593s" podCreationTimestamp="2026-02-02 11:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:44.659199357 +0000 UTC m=+1241.663448319" watchObservedRunningTime="2026-02-02 11:17:44.662409593 +0000 UTC m=+1241.666658555" Feb 02 11:17:44 crc kubenswrapper[4925]: I0202 11:17:44.678829 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-b379-account-create-update-kqh5k" podStartSLOduration=18.67880732 podStartE2EDuration="18.67880732s" podCreationTimestamp="2026-02-02 11:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:44.678441471 +0000 UTC m=+1241.682690443" watchObservedRunningTime="2026-02-02 11:17:44.67880732 +0000 UTC 
m=+1241.683056282" Feb 02 11:17:44 crc kubenswrapper[4925]: I0202 11:17:44.697716 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-rclxc" podStartSLOduration=17.697698014 podStartE2EDuration="17.697698014s" podCreationTimestamp="2026-02-02 11:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:44.69640188 +0000 UTC m=+1241.700650842" watchObservedRunningTime="2026-02-02 11:17:44.697698014 +0000 UTC m=+1241.701946976" Feb 02 11:17:45 crc kubenswrapper[4925]: I0202 11:17:45.690387 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xmpnn" podStartSLOduration=17.690363113 podStartE2EDuration="17.690363113s" podCreationTimestamp="2026-02-02 11:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:45.684411264 +0000 UTC m=+1242.688660236" watchObservedRunningTime="2026-02-02 11:17:45.690363113 +0000 UTC m=+1242.694612075" Feb 02 11:17:45 crc kubenswrapper[4925]: I0202 11:17:45.729852 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-rdb26" podStartSLOduration=18.729826695 podStartE2EDuration="18.729826695s" podCreationTimestamp="2026-02-02 11:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:45.729547868 +0000 UTC m=+1242.733796840" watchObservedRunningTime="2026-02-02 11:17:45.729826695 +0000 UTC m=+1242.734075657" Feb 02 11:17:45 crc kubenswrapper[4925]: I0202 11:17:45.735406 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c6e8-account-create-update-s5hwt" podStartSLOduration=18.735391144 podStartE2EDuration="18.735391144s" 
podCreationTimestamp="2026-02-02 11:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:45.717372673 +0000 UTC m=+1242.721621635" watchObservedRunningTime="2026-02-02 11:17:45.735391144 +0000 UTC m=+1242.739640106" Feb 02 11:17:45 crc kubenswrapper[4925]: I0202 11:17:45.745664 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-da80-account-create-update-2hpp2" podStartSLOduration=18.745645237 podStartE2EDuration="18.745645237s" podCreationTimestamp="2026-02-02 11:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:17:45.744772904 +0000 UTC m=+1242.749021886" watchObservedRunningTime="2026-02-02 11:17:45.745645237 +0000 UTC m=+1242.749894199" Feb 02 11:17:49 crc kubenswrapper[4925]: I0202 11:17:49.693528 4925 generic.go:334] "Generic (PLEG): container finished" podID="5826e7fc-d781-46da-a2dc-09baa99ab163" containerID="be348d929c12884d0f657b05b4c4543e50183126b26a9cfd25fb754a10960370" exitCode=0 Feb 02 11:17:49 crc kubenswrapper[4925]: I0202 11:17:49.693604 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xmpnn" event={"ID":"5826e7fc-d781-46da-a2dc-09baa99ab163","Type":"ContainerDied","Data":"be348d929c12884d0f657b05b4c4543e50183126b26a9cfd25fb754a10960370"} Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.709315 4925 generic.go:334] "Generic (PLEG): container finished" podID="fab8fa26-070e-4295-bbf4-b44a994f2ba0" containerID="40c3bb36d0e8a5e72bafd41685ad46c3560463f3770b1fe47dd3fe03b210f492" exitCode=0 Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.709379 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rclxc" 
event={"ID":"fab8fa26-070e-4295-bbf4-b44a994f2ba0","Type":"ContainerDied","Data":"40c3bb36d0e8a5e72bafd41685ad46c3560463f3770b1fe47dd3fe03b210f492"} Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.710728 4925 generic.go:334] "Generic (PLEG): container finished" podID="53fefe30-82c4-4d41-9af2-c23e671ce91e" containerID="903ce1f065f896201bddf4ca3c2c292be4b924b8b44d523ce4c4182008d9503d" exitCode=0 Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.710765 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rdb26" event={"ID":"53fefe30-82c4-4d41-9af2-c23e671ce91e","Type":"ContainerDied","Data":"903ce1f065f896201bddf4ca3c2c292be4b924b8b44d523ce4c4182008d9503d"} Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.720651 4925 generic.go:334] "Generic (PLEG): container finished" podID="f03ccf8f-b9cc-4352-8c6b-7a5705807701" containerID="b0a56371d541f3801251455bc7b78eb139c75200719877c714f4ad75dc0ed4f5" exitCode=0 Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.720727 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-da80-account-create-update-2hpp2" event={"ID":"f03ccf8f-b9cc-4352-8c6b-7a5705807701","Type":"ContainerDied","Data":"b0a56371d541f3801251455bc7b78eb139c75200719877c714f4ad75dc0ed4f5"} Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.723354 4925 generic.go:334] "Generic (PLEG): container finished" podID="cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c" containerID="766960461a2916d9e4721732fa0d331e5609fb70f6a8265acc6b4e4afd702a34" exitCode=0 Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.723418 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rx9rv" event={"ID":"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c","Type":"ContainerDied","Data":"766960461a2916d9e4721732fa0d331e5609fb70f6a8265acc6b4e4afd702a34"} Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.729130 4925 generic.go:334] "Generic (PLEG): container finished" 
podID="c0cf93f3-efad-4435-820a-f9b4631c1efd" containerID="451dd8f6eec86e08c08cfb9a9a4dbadb1ba68945cf87ce057b9d8137ee776837" exitCode=0 Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.729303 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b379-account-create-update-kqh5k" event={"ID":"c0cf93f3-efad-4435-820a-f9b4631c1efd","Type":"ContainerDied","Data":"451dd8f6eec86e08c08cfb9a9a4dbadb1ba68945cf87ce057b9d8137ee776837"} Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.734815 4925 generic.go:334] "Generic (PLEG): container finished" podID="2cdfffad-8b9e-41d8-b2da-a72d965d36a0" containerID="b899db659ca08d16b61d5f09b3f8cc9e00e9e475976198674ce67baa0591334f" exitCode=0 Feb 02 11:17:50 crc kubenswrapper[4925]: I0202 11:17:50.734898 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c6e8-account-create-update-s5hwt" event={"ID":"2cdfffad-8b9e-41d8-b2da-a72d965d36a0","Type":"ContainerDied","Data":"b899db659ca08d16b61d5f09b3f8cc9e00e9e475976198674ce67baa0591334f"} Feb 02 11:17:52 crc kubenswrapper[4925]: I0202 11:17:52.960242 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:52 crc kubenswrapper[4925]: I0202 11:17:52.971870 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:52 crc kubenswrapper[4925]: I0202 11:17:52.976437 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.009846 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.010996 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.030285 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.034588 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037142 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-operator-scripts\") pod \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\" (UID: \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037332 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53fefe30-82c4-4d41-9af2-c23e671ce91e-operator-scripts\") pod \"53fefe30-82c4-4d41-9af2-c23e671ce91e\" (UID: \"53fefe30-82c4-4d41-9af2-c23e671ce91e\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037353 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rjmp\" (UniqueName: \"kubernetes.io/projected/f03ccf8f-b9cc-4352-8c6b-7a5705807701-kube-api-access-6rjmp\") pod \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\" (UID: \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037379 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6bcr\" (UniqueName: \"kubernetes.io/projected/fab8fa26-070e-4295-bbf4-b44a994f2ba0-kube-api-access-b6bcr\") pod \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\" (UID: \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037403 4925 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8ghjw\" (UniqueName: \"kubernetes.io/projected/c0cf93f3-efad-4435-820a-f9b4631c1efd-kube-api-access-8ghjw\") pod \"c0cf93f3-efad-4435-820a-f9b4631c1efd\" (UID: \"c0cf93f3-efad-4435-820a-f9b4631c1efd\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037451 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf8z8\" (UniqueName: \"kubernetes.io/projected/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-kube-api-access-lf8z8\") pod \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\" (UID: \"2cdfffad-8b9e-41d8-b2da-a72d965d36a0\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037481 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03ccf8f-b9cc-4352-8c6b-7a5705807701-operator-scripts\") pod \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\" (UID: \"f03ccf8f-b9cc-4352-8c6b-7a5705807701\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037553 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fab8fa26-070e-4295-bbf4-b44a994f2ba0-operator-scripts\") pod \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\" (UID: \"fab8fa26-070e-4295-bbf4-b44a994f2ba0\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037593 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qb2j\" (UniqueName: \"kubernetes.io/projected/53fefe30-82c4-4d41-9af2-c23e671ce91e-kube-api-access-9qb2j\") pod \"53fefe30-82c4-4d41-9af2-c23e671ce91e\" (UID: \"53fefe30-82c4-4d41-9af2-c23e671ce91e\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.037633 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cf93f3-efad-4435-820a-f9b4631c1efd-operator-scripts\") pod 
\"c0cf93f3-efad-4435-820a-f9b4631c1efd\" (UID: \"c0cf93f3-efad-4435-820a-f9b4631c1efd\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.038880 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cdfffad-8b9e-41d8-b2da-a72d965d36a0" (UID: "2cdfffad-8b9e-41d8-b2da-a72d965d36a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.038998 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53fefe30-82c4-4d41-9af2-c23e671ce91e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53fefe30-82c4-4d41-9af2-c23e671ce91e" (UID: "53fefe30-82c4-4d41-9af2-c23e671ce91e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.039473 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab8fa26-070e-4295-bbf4-b44a994f2ba0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fab8fa26-070e-4295-bbf4-b44a994f2ba0" (UID: "fab8fa26-070e-4295-bbf4-b44a994f2ba0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.039828 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03ccf8f-b9cc-4352-8c6b-7a5705807701-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f03ccf8f-b9cc-4352-8c6b-7a5705807701" (UID: "f03ccf8f-b9cc-4352-8c6b-7a5705807701"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.039983 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0cf93f3-efad-4435-820a-f9b4631c1efd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0cf93f3-efad-4435-820a-f9b4631c1efd" (UID: "c0cf93f3-efad-4435-820a-f9b4631c1efd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.046352 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-kube-api-access-lf8z8" (OuterVolumeSpecName: "kube-api-access-lf8z8") pod "2cdfffad-8b9e-41d8-b2da-a72d965d36a0" (UID: "2cdfffad-8b9e-41d8-b2da-a72d965d36a0"). InnerVolumeSpecName "kube-api-access-lf8z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.046416 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab8fa26-070e-4295-bbf4-b44a994f2ba0-kube-api-access-b6bcr" (OuterVolumeSpecName: "kube-api-access-b6bcr") pod "fab8fa26-070e-4295-bbf4-b44a994f2ba0" (UID: "fab8fa26-070e-4295-bbf4-b44a994f2ba0"). InnerVolumeSpecName "kube-api-access-b6bcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.047182 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0cf93f3-efad-4435-820a-f9b4631c1efd-kube-api-access-8ghjw" (OuterVolumeSpecName: "kube-api-access-8ghjw") pod "c0cf93f3-efad-4435-820a-f9b4631c1efd" (UID: "c0cf93f3-efad-4435-820a-f9b4631c1efd"). InnerVolumeSpecName "kube-api-access-8ghjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.049606 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53fefe30-82c4-4d41-9af2-c23e671ce91e-kube-api-access-9qb2j" (OuterVolumeSpecName: "kube-api-access-9qb2j") pod "53fefe30-82c4-4d41-9af2-c23e671ce91e" (UID: "53fefe30-82c4-4d41-9af2-c23e671ce91e"). InnerVolumeSpecName "kube-api-access-9qb2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.051274 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03ccf8f-b9cc-4352-8c6b-7a5705807701-kube-api-access-6rjmp" (OuterVolumeSpecName: "kube-api-access-6rjmp") pod "f03ccf8f-b9cc-4352-8c6b-7a5705807701" (UID: "f03ccf8f-b9cc-4352-8c6b-7a5705807701"). InnerVolumeSpecName "kube-api-access-6rjmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.139492 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6n62\" (UniqueName: \"kubernetes.io/projected/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-kube-api-access-s6n62\") pod \"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\" (UID: \"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.139577 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xpqh\" (UniqueName: \"kubernetes.io/projected/5826e7fc-d781-46da-a2dc-09baa99ab163-kube-api-access-2xpqh\") pod \"5826e7fc-d781-46da-a2dc-09baa99ab163\" (UID: \"5826e7fc-d781-46da-a2dc-09baa99ab163\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.139651 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-operator-scripts\") pod 
\"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\" (UID: \"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.139701 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5826e7fc-d781-46da-a2dc-09baa99ab163-operator-scripts\") pod \"5826e7fc-d781-46da-a2dc-09baa99ab163\" (UID: \"5826e7fc-d781-46da-a2dc-09baa99ab163\") " Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140046 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf8z8\" (UniqueName: \"kubernetes.io/projected/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-kube-api-access-lf8z8\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140091 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03ccf8f-b9cc-4352-8c6b-7a5705807701-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140109 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fab8fa26-070e-4295-bbf4-b44a994f2ba0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140127 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qb2j\" (UniqueName: \"kubernetes.io/projected/53fefe30-82c4-4d41-9af2-c23e671ce91e-kube-api-access-9qb2j\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140141 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cf93f3-efad-4435-820a-f9b4631c1efd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140197 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2cdfffad-8b9e-41d8-b2da-a72d965d36a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140211 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53fefe30-82c4-4d41-9af2-c23e671ce91e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140220 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rjmp\" (UniqueName: \"kubernetes.io/projected/f03ccf8f-b9cc-4352-8c6b-7a5705807701-kube-api-access-6rjmp\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140229 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6bcr\" (UniqueName: \"kubernetes.io/projected/fab8fa26-070e-4295-bbf4-b44a994f2ba0-kube-api-access-b6bcr\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140238 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ghjw\" (UniqueName: \"kubernetes.io/projected/c0cf93f3-efad-4435-820a-f9b4631c1efd-kube-api-access-8ghjw\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140379 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c" (UID: "cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.140530 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5826e7fc-d781-46da-a2dc-09baa99ab163-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5826e7fc-d781-46da-a2dc-09baa99ab163" (UID: "5826e7fc-d781-46da-a2dc-09baa99ab163"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.143686 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5826e7fc-d781-46da-a2dc-09baa99ab163-kube-api-access-2xpqh" (OuterVolumeSpecName: "kube-api-access-2xpqh") pod "5826e7fc-d781-46da-a2dc-09baa99ab163" (UID: "5826e7fc-d781-46da-a2dc-09baa99ab163"). InnerVolumeSpecName "kube-api-access-2xpqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.146919 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-kube-api-access-s6n62" (OuterVolumeSpecName: "kube-api-access-s6n62") pod "cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c" (UID: "cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c"). InnerVolumeSpecName "kube-api-access-s6n62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.241811 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6n62\" (UniqueName: \"kubernetes.io/projected/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-kube-api-access-s6n62\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.241848 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xpqh\" (UniqueName: \"kubernetes.io/projected/5826e7fc-d781-46da-a2dc-09baa99ab163-kube-api-access-2xpqh\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.241861 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.241873 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5826e7fc-d781-46da-a2dc-09baa99ab163-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.763850 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b379-account-create-update-kqh5k" event={"ID":"c0cf93f3-efad-4435-820a-f9b4631c1efd","Type":"ContainerDied","Data":"65a5e831fab4cf6866ea595b8360ac9f901cfd4c4b59eca83b690261e9b393c8"} Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.764237 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65a5e831fab4cf6866ea595b8360ac9f901cfd4c4b59eca83b690261e9b393c8" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.764356 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b379-account-create-update-kqh5k" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.767010 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c6e8-account-create-update-s5hwt" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.766999 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c6e8-account-create-update-s5hwt" event={"ID":"2cdfffad-8b9e-41d8-b2da-a72d965d36a0","Type":"ContainerDied","Data":"61cfa41dc257847726d2cf9188783a1ab624579beb092e5b048d6918971a6f28"} Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.767223 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61cfa41dc257847726d2cf9188783a1ab624579beb092e5b048d6918971a6f28" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.769407 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rclxc" event={"ID":"fab8fa26-070e-4295-bbf4-b44a994f2ba0","Type":"ContainerDied","Data":"8ad05949fc83bbb69d365673031dee300615a8b71a9292fd3bdd7a3289653e94"} Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.769457 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ad05949fc83bbb69d365673031dee300615a8b71a9292fd3bdd7a3289653e94" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.769559 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rclxc" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.775946 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rdb26" event={"ID":"53fefe30-82c4-4d41-9af2-c23e671ce91e","Type":"ContainerDied","Data":"e6dc0f66ba82a443cb04a96b29c1bcdf5998fae6bc3e9d165288fd6da99fef6f"} Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.775985 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6dc0f66ba82a443cb04a96b29c1bcdf5998fae6bc3e9d165288fd6da99fef6f" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.776126 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rdb26" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.781618 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-da80-account-create-update-2hpp2" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.781651 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-da80-account-create-update-2hpp2" event={"ID":"f03ccf8f-b9cc-4352-8c6b-7a5705807701","Type":"ContainerDied","Data":"19b4f760e9a8f2d8e753338e520fd65e88cdb7ff1fb47621d06e8ed7a8c2a17e"} Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.781705 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b4f760e9a8f2d8e753338e520fd65e88cdb7ff1fb47621d06e8ed7a8c2a17e" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.784014 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rx9rv" event={"ID":"cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c","Type":"ContainerDied","Data":"f156b0f1e2f498dbc2c2ba1773d65ca0ab88ddd2ff69f81486dcb0a2745175a5"} Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.784194 4925 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f156b0f1e2f498dbc2c2ba1773d65ca0ab88ddd2ff69f81486dcb0a2745175a5" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.784043 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rx9rv" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.789216 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xmpnn" event={"ID":"5826e7fc-d781-46da-a2dc-09baa99ab163","Type":"ContainerDied","Data":"304b3f725a4a768e30ce580b681f14b9931b681b9e0cc713a1d5e2ea2564fcda"} Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.789268 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="304b3f725a4a768e30ce580b681f14b9931b681b9e0cc713a1d5e2ea2564fcda" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.789358 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xmpnn" Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.791881 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rvhtj" event={"ID":"de3b7878-e41d-4ea8-b866-f34a48455d29","Type":"ContainerStarted","Data":"88d9e3c632b7c12348d3057723c79346bf68e8dc1a16d12b97e97c729a28160b"} Feb 02 11:17:53 crc kubenswrapper[4925]: I0202 11:17:53.813370 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rvhtj" podStartSLOduration=16.778732364 podStartE2EDuration="26.813344744s" podCreationTimestamp="2026-02-02 11:17:27 +0000 UTC" firstStartedPulling="2026-02-02 11:17:43.143830508 +0000 UTC m=+1240.148079470" lastFinishedPulling="2026-02-02 11:17:53.178442888 +0000 UTC m=+1250.182691850" observedRunningTime="2026-02-02 11:17:53.809213223 +0000 UTC m=+1250.813462205" watchObservedRunningTime="2026-02-02 11:17:53.813344744 +0000 UTC m=+1250.817593716" Feb 02 11:18:01 crc kubenswrapper[4925]: I0202 11:18:01.852972 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xtzng" event={"ID":"66bbba42-9e45-446e-8042-a428a6269d08","Type":"ContainerStarted","Data":"904e9cf8bf2d427ca1214200e077aff4370afbced0959f9f89e43ff54f630981"} Feb 02 11:18:01 crc kubenswrapper[4925]: I0202 11:18:01.874543 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xtzng" podStartSLOduration=2.213926298 podStartE2EDuration="41.874523005s" podCreationTimestamp="2026-02-02 11:17:20 +0000 UTC" firstStartedPulling="2026-02-02 11:17:21.173700872 +0000 UTC m=+1218.177949844" lastFinishedPulling="2026-02-02 11:18:00.834297589 +0000 UTC m=+1257.838546551" observedRunningTime="2026-02-02 11:18:01.867533788 +0000 UTC m=+1258.871782740" watchObservedRunningTime="2026-02-02 11:18:01.874523005 +0000 UTC m=+1258.878771967" Feb 02 11:18:02 crc kubenswrapper[4925]: E0202 11:18:02.663288 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde3b7878_e41d_4ea8_b866_f34a48455d29.slice/crio-88d9e3c632b7c12348d3057723c79346bf68e8dc1a16d12b97e97c729a28160b.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:18:02 crc kubenswrapper[4925]: I0202 11:18:02.863385 4925 generic.go:334] "Generic (PLEG): container finished" podID="de3b7878-e41d-4ea8-b866-f34a48455d29" containerID="88d9e3c632b7c12348d3057723c79346bf68e8dc1a16d12b97e97c729a28160b" exitCode=0 Feb 02 11:18:02 crc kubenswrapper[4925]: I0202 11:18:02.863436 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rvhtj" event={"ID":"de3b7878-e41d-4ea8-b866-f34a48455d29","Type":"ContainerDied","Data":"88d9e3c632b7c12348d3057723c79346bf68e8dc1a16d12b97e97c729a28160b"} Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.217793 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.325131 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-combined-ca-bundle\") pod \"de3b7878-e41d-4ea8-b866-f34a48455d29\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.325187 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-config-data\") pod \"de3b7878-e41d-4ea8-b866-f34a48455d29\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.325283 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q5tt\" (UniqueName: \"kubernetes.io/projected/de3b7878-e41d-4ea8-b866-f34a48455d29-kube-api-access-9q5tt\") pod \"de3b7878-e41d-4ea8-b866-f34a48455d29\" (UID: \"de3b7878-e41d-4ea8-b866-f34a48455d29\") " Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.332047 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de3b7878-e41d-4ea8-b866-f34a48455d29-kube-api-access-9q5tt" (OuterVolumeSpecName: "kube-api-access-9q5tt") pod "de3b7878-e41d-4ea8-b866-f34a48455d29" (UID: "de3b7878-e41d-4ea8-b866-f34a48455d29"). InnerVolumeSpecName "kube-api-access-9q5tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.352868 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de3b7878-e41d-4ea8-b866-f34a48455d29" (UID: "de3b7878-e41d-4ea8-b866-f34a48455d29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.368029 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-config-data" (OuterVolumeSpecName: "config-data") pod "de3b7878-e41d-4ea8-b866-f34a48455d29" (UID: "de3b7878-e41d-4ea8-b866-f34a48455d29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.426984 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q5tt\" (UniqueName: \"kubernetes.io/projected/de3b7878-e41d-4ea8-b866-f34a48455d29-kube-api-access-9q5tt\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.427030 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.427051 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3b7878-e41d-4ea8-b866-f34a48455d29-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.877276 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rvhtj" event={"ID":"de3b7878-e41d-4ea8-b866-f34a48455d29","Type":"ContainerDied","Data":"45c3faf62fa9d26f5ea3142c09d021953eb4a7e788e9d6cc2f4a9ec6790063ad"} Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.877574 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c3faf62fa9d26f5ea3142c09d021953eb4a7e788e9d6cc2f4a9ec6790063ad" Feb 02 11:18:04 crc kubenswrapper[4925]: I0202 11:18:04.877635 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rvhtj" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.149627 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-t7zfh"] Feb 02 11:18:05 crc kubenswrapper[4925]: E0202 11:18:05.149970 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03ccf8f-b9cc-4352-8c6b-7a5705807701" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.149991 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03ccf8f-b9cc-4352-8c6b-7a5705807701" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: E0202 11:18:05.150002 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c" containerName="mariadb-database-create" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150009 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c" containerName="mariadb-database-create" Feb 02 11:18:05 crc kubenswrapper[4925]: E0202 11:18:05.150022 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5826e7fc-d781-46da-a2dc-09baa99ab163" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150028 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="5826e7fc-d781-46da-a2dc-09baa99ab163" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: E0202 11:18:05.150042 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0cf93f3-efad-4435-820a-f9b4631c1efd" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150049 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0cf93f3-efad-4435-820a-f9b4631c1efd" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: E0202 11:18:05.150063 4925 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b7878-e41d-4ea8-b866-f34a48455d29" containerName="keystone-db-sync" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150090 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b7878-e41d-4ea8-b866-f34a48455d29" containerName="keystone-db-sync" Feb 02 11:18:05 crc kubenswrapper[4925]: E0202 11:18:05.150102 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab8fa26-070e-4295-bbf4-b44a994f2ba0" containerName="mariadb-database-create" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150107 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab8fa26-070e-4295-bbf4-b44a994f2ba0" containerName="mariadb-database-create" Feb 02 11:18:05 crc kubenswrapper[4925]: E0202 11:18:05.150117 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fefe30-82c4-4d41-9af2-c23e671ce91e" containerName="mariadb-database-create" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150123 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fefe30-82c4-4d41-9af2-c23e671ce91e" containerName="mariadb-database-create" Feb 02 11:18:05 crc kubenswrapper[4925]: E0202 11:18:05.150138 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdfffad-8b9e-41d8-b2da-a72d965d36a0" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150147 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdfffad-8b9e-41d8-b2da-a72d965d36a0" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150339 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b7878-e41d-4ea8-b866-f34a48455d29" containerName="keystone-db-sync" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150359 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="53fefe30-82c4-4d41-9af2-c23e671ce91e" containerName="mariadb-database-create" 
Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150374 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0cf93f3-efad-4435-820a-f9b4631c1efd" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150386 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03ccf8f-b9cc-4352-8c6b-7a5705807701" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150398 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c" containerName="mariadb-database-create" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150414 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab8fa26-070e-4295-bbf4-b44a994f2ba0" containerName="mariadb-database-create" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150427 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="5826e7fc-d781-46da-a2dc-09baa99ab163" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.150436 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdfffad-8b9e-41d8-b2da-a72d965d36a0" containerName="mariadb-account-create-update" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.162461 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-t7zfh"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.162598 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.181722 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rjbr7"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.182656 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.184811 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.186423 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wkg8f" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.189425 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.190316 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.193658 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.195384 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rjbr7"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.245972 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246035 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-combined-ca-bundle\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246058 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246110 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-scripts\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246140 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-credential-keys\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246164 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-config\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246218 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pls6h\" (UniqueName: \"kubernetes.io/projected/79e6de27-1a35-4fc7-8fc9-ca063454daa2-kube-api-access-pls6h\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246236 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-config-data\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246250 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-fernet-keys\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246264 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhx8\" (UniqueName: \"kubernetes.io/projected/60beee8c-66a8-426f-a911-91d108e9f12d-kube-api-access-9lhx8\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.246279 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.342056 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-d8tqm"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.343198 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.344884 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dx7sq" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.345675 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.346014 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347383 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347427 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-combined-ca-bundle\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347449 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347473 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-scripts\") pod \"keystone-bootstrap-rjbr7\" (UID: 
\"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347503 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-credential-keys\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347528 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-config\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347570 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pls6h\" (UniqueName: \"kubernetes.io/projected/79e6de27-1a35-4fc7-8fc9-ca063454daa2-kube-api-access-pls6h\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347586 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-config-data\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347601 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-fernet-keys\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 
11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347615 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhx8\" (UniqueName: \"kubernetes.io/projected/60beee8c-66a8-426f-a911-91d108e9f12d-kube-api-access-9lhx8\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.347629 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.348488 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.348981 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.349962 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-config\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.350580 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.359279 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9cjl8"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.360381 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.361250 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-config-data\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.361869 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-combined-ca-bundle\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.362233 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-credential-keys\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.364238 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.364344 4925 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vdmk7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.364454 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.364541 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-fernet-keys\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.364697 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-scripts\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.389737 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhx8\" (UniqueName: \"kubernetes.io/projected/60beee8c-66a8-426f-a911-91d108e9f12d-kube-api-access-9lhx8\") pod \"keystone-bootstrap-rjbr7\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.403564 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9cjl8"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.426692 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pls6h\" (UniqueName: \"kubernetes.io/projected/79e6de27-1a35-4fc7-8fc9-ca063454daa2-kube-api-access-pls6h\") pod \"dnsmasq-dns-75bb4695fc-t7zfh\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.454688 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtqs\" (UniqueName: \"kubernetes.io/projected/e9ab37c5-8a76-48f2-ade7-92735dc062c4-kube-api-access-smtqs\") pod \"neutron-db-sync-9cjl8\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.454771 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-combined-ca-bundle\") pod \"neutron-db-sync-9cjl8\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.454801 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-etc-machine-id\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.454821 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-db-sync-config-data\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.454840 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-scripts\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.454872 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hxzm\" (UniqueName: \"kubernetes.io/projected/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-kube-api-access-6hxzm\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.454887 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-config\") pod \"neutron-db-sync-9cjl8\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.454905 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-combined-ca-bundle\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.454937 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-config-data\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.471730 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.473613 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.485685 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.485782 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.489525 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.498711 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.505721 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d8tqm"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.506876 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.526862 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-gz8kp"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.528006 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.534462 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tcgcl" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.534744 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.551148 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gz8kp"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.568803 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hxzm\" (UniqueName: \"kubernetes.io/projected/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-kube-api-access-6hxzm\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.568869 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-config\") pod \"neutron-db-sync-9cjl8\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.568900 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-combined-ca-bundle\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.568967 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-config-data\") pod \"cinder-db-sync-d8tqm\" (UID: 
\"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.568989 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smtqs\" (UniqueName: \"kubernetes.io/projected/e9ab37c5-8a76-48f2-ade7-92735dc062c4-kube-api-access-smtqs\") pod \"neutron-db-sync-9cjl8\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.569095 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-combined-ca-bundle\") pod \"neutron-db-sync-9cjl8\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.569136 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-etc-machine-id\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.569176 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-db-sync-config-data\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.569198 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-scripts\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: 
I0202 11:18:05.574506 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-etc-machine-id\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.581981 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-scripts\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.593645 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-config\") pod \"neutron-db-sync-9cjl8\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.596710 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-combined-ca-bundle\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.627843 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-combined-ca-bundle\") pod \"neutron-db-sync-9cjl8\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.628035 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-db-sync-config-data\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.628268 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-config-data\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.645528 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hxzm\" (UniqueName: \"kubernetes.io/projected/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-kube-api-access-6hxzm\") pod \"cinder-db-sync-d8tqm\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.653046 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtqs\" (UniqueName: \"kubernetes.io/projected/e9ab37c5-8a76-48f2-ade7-92735dc062c4-kube-api-access-smtqs\") pod \"neutron-db-sync-9cjl8\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670224 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-run-httpd\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670276 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-log-httpd\") pod \"ceilometer-0\" (UID: 
\"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670297 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-scripts\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670311 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-config-data\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670336 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx9wg\" (UniqueName: \"kubernetes.io/projected/15e0ab2c-a590-4b39-af8b-a055a29f01c0-kube-api-access-dx9wg\") pod \"barbican-db-sync-gz8kp\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670353 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670377 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-combined-ca-bundle\") pod \"barbican-db-sync-gz8kp\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " pod="openstack/barbican-db-sync-gz8kp" Feb 02 
11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670393 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-db-sync-config-data\") pod \"barbican-db-sync-gz8kp\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670455 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.670490 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfx92\" (UniqueName: \"kubernetes.io/projected/06f0a3f9-6761-457e-aeda-efcf1d326211-kube-api-access-dfx92\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.746847 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-t7zfh"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.762180 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xxnvr"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.763897 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.767568 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.770282 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k25b6" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.770507 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.771400 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-pxs8x"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.771923 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-log-httpd\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.771959 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-scripts\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.771977 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-config-data\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.772003 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx9wg\" (UniqueName: 
\"kubernetes.io/projected/15e0ab2c-a590-4b39-af8b-a055a29f01c0-kube-api-access-dx9wg\") pod \"barbican-db-sync-gz8kp\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.772028 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.772053 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-combined-ca-bundle\") pod \"barbican-db-sync-gz8kp\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.772089 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-db-sync-config-data\") pod \"barbican-db-sync-gz8kp\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.772157 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.772205 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfx92\" (UniqueName: \"kubernetes.io/projected/06f0a3f9-6761-457e-aeda-efcf1d326211-kube-api-access-dfx92\") pod \"ceilometer-0\" (UID: 
\"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.772253 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-run-httpd\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.772715 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-run-httpd\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.772966 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-log-httpd\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.773193 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.792628 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xxnvr"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.793021 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.798166 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-pxs8x"] Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.809098 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.810835 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.821318 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-scripts\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.821318 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-combined-ca-bundle\") pod \"barbican-db-sync-gz8kp\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.821317 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-db-sync-config-data\") pod \"barbican-db-sync-gz8kp\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.821646 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.823630 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dx9wg\" (UniqueName: \"kubernetes.io/projected/15e0ab2c-a590-4b39-af8b-a055a29f01c0-kube-api-access-dx9wg\") pod \"barbican-db-sync-gz8kp\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.824238 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-config-data\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.826541 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfx92\" (UniqueName: \"kubernetes.io/projected/06f0a3f9-6761-457e-aeda-efcf1d326211-kube-api-access-dfx92\") pod \"ceilometer-0\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") " pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.827769 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.877429 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.877515 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-config\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.877547 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9qz5\" (UniqueName: \"kubernetes.io/projected/20d81564-431d-40c5-be81-3961fab3e8b8-kube-api-access-h9qz5\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.877570 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-config-data\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.877593 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: 
\"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.877627 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d81564-431d-40c5-be81-3961fab3e8b8-logs\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.886392 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-scripts\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.886488 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.886660 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-combined-ca-bundle\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.886715 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhql9\" (UniqueName: \"kubernetes.io/projected/abfcab41-3119-4ec5-94ac-32e949f0de93-kube-api-access-nhql9\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: 
\"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.972557 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.989567 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-config-data\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.989647 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.989719 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d81564-431d-40c5-be81-3961fab3e8b8-logs\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.989747 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-scripts\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.989790 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.989861 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-combined-ca-bundle\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.989897 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhql9\" (UniqueName: \"kubernetes.io/projected/abfcab41-3119-4ec5-94ac-32e949f0de93-kube-api-access-nhql9\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.989960 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.990020 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-config\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.990101 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9qz5\" (UniqueName: 
\"kubernetes.io/projected/20d81564-431d-40c5-be81-3961fab3e8b8-kube-api-access-h9qz5\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.992545 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d81564-431d-40c5-be81-3961fab3e8b8-logs\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.992756 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.993862 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.994199 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.994603 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-config\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: 
\"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.998261 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-combined-ca-bundle\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.998268 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-config-data\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:05 crc kubenswrapper[4925]: I0202 11:18:05.998396 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-scripts\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.012593 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhql9\" (UniqueName: \"kubernetes.io/projected/abfcab41-3119-4ec5-94ac-32e949f0de93-kube-api-access-nhql9\") pod \"dnsmasq-dns-745b9ddc8c-pxs8x\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.015542 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9qz5\" (UniqueName: \"kubernetes.io/projected/20d81564-431d-40c5-be81-3961fab3e8b8-kube-api-access-h9qz5\") pod \"placement-db-sync-xxnvr\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") " pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:06 crc 
kubenswrapper[4925]: I0202 11:18:06.149156 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xxnvr" Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.166547 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.204932 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-t7zfh"] Feb 02 11:18:06 crc kubenswrapper[4925]: W0202 11:18:06.217906 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79e6de27_1a35_4fc7_8fc9_ca063454daa2.slice/crio-012636f12588eb6c864f275043a667e6f101ceb46d6e00e58e52577eba989ff6 WatchSource:0}: Error finding container 012636f12588eb6c864f275043a667e6f101ceb46d6e00e58e52577eba989ff6: Status 404 returned error can't find the container with id 012636f12588eb6c864f275043a667e6f101ceb46d6e00e58e52577eba989ff6 Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.331462 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rjbr7"] Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.503918 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d8tqm"] Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.833677 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9cjl8"] Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.877067 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.885029 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gz8kp"] Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.923942 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-pxs8x"] Feb 02 
11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.945679 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xxnvr"] Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.951251 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d8tqm" event={"ID":"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0","Type":"ContainerStarted","Data":"018be16b54d7ab34681f9d91c19d40a5645a9f82dbfc62ff97d016e6638e8e13"} Feb 02 11:18:06 crc kubenswrapper[4925]: W0202 11:18:06.954751 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20d81564_431d_40c5_be81_3961fab3e8b8.slice/crio-a792f477593bddbc74ca0ef3fe6ba2d064533ff7c172e35cd83548eefc9bbaec WatchSource:0}: Error finding container a792f477593bddbc74ca0ef3fe6ba2d064533ff7c172e35cd83548eefc9bbaec: Status 404 returned error can't find the container with id a792f477593bddbc74ca0ef3fe6ba2d064533ff7c172e35cd83548eefc9bbaec Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.955571 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f0a3f9-6761-457e-aeda-efcf1d326211","Type":"ContainerStarted","Data":"d72713ab9bd768609db8a49371e3d1a7411cd047a840e9763b4c14f41a0f0c4e"} Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.958257 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rjbr7" event={"ID":"60beee8c-66a8-426f-a911-91d108e9f12d","Type":"ContainerStarted","Data":"c1e3c213903d327eae1f1e851d8f7d8b15e026d760faad28aadb8224af8d7f12"} Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.959924 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" event={"ID":"abfcab41-3119-4ec5-94ac-32e949f0de93","Type":"ContainerStarted","Data":"0567b16946b87cfbb04a4342f2846877363059726a247875ed3dbbbf237454a0"} Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.962417 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9cjl8" event={"ID":"e9ab37c5-8a76-48f2-ade7-92735dc062c4","Type":"ContainerStarted","Data":"cb1e35a0b46084a07acc259727cc77e33fdf1dc1c7b6e1b231b8ab7603a48ccb"} Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.972410 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gz8kp" event={"ID":"15e0ab2c-a590-4b39-af8b-a055a29f01c0","Type":"ContainerStarted","Data":"e34e7de86a170015eaf4374f4a0e2cf889d3075cf5ca38aa18ad89c3c74cce13"} Feb 02 11:18:06 crc kubenswrapper[4925]: I0202 11:18:06.974958 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" event={"ID":"79e6de27-1a35-4fc7-8fc9-ca063454daa2","Type":"ContainerStarted","Data":"012636f12588eb6c864f275043a667e6f101ceb46d6e00e58e52577eba989ff6"} Feb 02 11:18:07 crc kubenswrapper[4925]: I0202 11:18:07.152654 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:18:07 crc kubenswrapper[4925]: I0202 11:18:07.994396 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" event={"ID":"abfcab41-3119-4ec5-94ac-32e949f0de93","Type":"ContainerStarted","Data":"a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8"} Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:07.998140 4925 generic.go:334] "Generic (PLEG): container finished" podID="79e6de27-1a35-4fc7-8fc9-ca063454daa2" containerID="0a8791fc9c9381c8923e265f0b981fbc0f3e5d8a85587b7eaa8d7d9b01555f82" exitCode=0 Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:07.998297 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" event={"ID":"79e6de27-1a35-4fc7-8fc9-ca063454daa2","Type":"ContainerDied","Data":"0a8791fc9c9381c8923e265f0b981fbc0f3e5d8a85587b7eaa8d7d9b01555f82"} Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.004298 4925 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-db-sync-xxnvr" event={"ID":"20d81564-431d-40c5-be81-3961fab3e8b8","Type":"ContainerStarted","Data":"a792f477593bddbc74ca0ef3fe6ba2d064533ff7c172e35cd83548eefc9bbaec"} Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.006928 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rjbr7" event={"ID":"60beee8c-66a8-426f-a911-91d108e9f12d","Type":"ContainerStarted","Data":"04bf78182aa9ebc59472bda9dd173eb78f57f80908346051af91c2ca8274c61a"} Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.010302 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9cjl8" event={"ID":"e9ab37c5-8a76-48f2-ade7-92735dc062c4","Type":"ContainerStarted","Data":"cf3a517b2cd796ca28b1a9fe64e8cb308993e7a69c4c0bec2db79a6d11eeb1c0"} Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.048508 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rjbr7" podStartSLOduration=3.048487358 podStartE2EDuration="3.048487358s" podCreationTimestamp="2026-02-02 11:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:18:08.041705008 +0000 UTC m=+1265.045953970" watchObservedRunningTime="2026-02-02 11:18:08.048487358 +0000 UTC m=+1265.052736320" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.372887 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.445623 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-dns-svc\") pod \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.445673 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-sb\") pod \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.445767 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-nb\") pod \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.445788 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-config\") pod \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.445954 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pls6h\" (UniqueName: \"kubernetes.io/projected/79e6de27-1a35-4fc7-8fc9-ca063454daa2-kube-api-access-pls6h\") pod \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\" (UID: \"79e6de27-1a35-4fc7-8fc9-ca063454daa2\") " Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.452416 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/79e6de27-1a35-4fc7-8fc9-ca063454daa2-kube-api-access-pls6h" (OuterVolumeSpecName: "kube-api-access-pls6h") pod "79e6de27-1a35-4fc7-8fc9-ca063454daa2" (UID: "79e6de27-1a35-4fc7-8fc9-ca063454daa2"). InnerVolumeSpecName "kube-api-access-pls6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.470952 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79e6de27-1a35-4fc7-8fc9-ca063454daa2" (UID: "79e6de27-1a35-4fc7-8fc9-ca063454daa2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.472601 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79e6de27-1a35-4fc7-8fc9-ca063454daa2" (UID: "79e6de27-1a35-4fc7-8fc9-ca063454daa2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.472933 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-config" (OuterVolumeSpecName: "config") pod "79e6de27-1a35-4fc7-8fc9-ca063454daa2" (UID: "79e6de27-1a35-4fc7-8fc9-ca063454daa2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.480289 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79e6de27-1a35-4fc7-8fc9-ca063454daa2" (UID: "79e6de27-1a35-4fc7-8fc9-ca063454daa2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.549195 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.549230 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.549241 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.549249 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e6de27-1a35-4fc7-8fc9-ca063454daa2-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:08 crc kubenswrapper[4925]: I0202 11:18:08.549258 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pls6h\" (UniqueName: \"kubernetes.io/projected/79e6de27-1a35-4fc7-8fc9-ca063454daa2-kube-api-access-pls6h\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:09 crc kubenswrapper[4925]: I0202 11:18:09.023837 4925 generic.go:334] "Generic (PLEG): container finished" podID="abfcab41-3119-4ec5-94ac-32e949f0de93" containerID="a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8" exitCode=0 Feb 02 11:18:09 crc kubenswrapper[4925]: I0202 11:18:09.023988 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" event={"ID":"abfcab41-3119-4ec5-94ac-32e949f0de93","Type":"ContainerDied","Data":"a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8"} Feb 02 11:18:09 crc kubenswrapper[4925]: I0202 
11:18:09.030392 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" event={"ID":"79e6de27-1a35-4fc7-8fc9-ca063454daa2","Type":"ContainerDied","Data":"012636f12588eb6c864f275043a667e6f101ceb46d6e00e58e52577eba989ff6"} Feb 02 11:18:09 crc kubenswrapper[4925]: I0202 11:18:09.030486 4925 scope.go:117] "RemoveContainer" containerID="0a8791fc9c9381c8923e265f0b981fbc0f3e5d8a85587b7eaa8d7d9b01555f82" Feb 02 11:18:09 crc kubenswrapper[4925]: I0202 11:18:09.030530 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-t7zfh" Feb 02 11:18:09 crc kubenswrapper[4925]: I0202 11:18:09.121269 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-t7zfh"] Feb 02 11:18:09 crc kubenswrapper[4925]: I0202 11:18:09.137363 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-t7zfh"] Feb 02 11:18:09 crc kubenswrapper[4925]: I0202 11:18:09.164835 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9cjl8" podStartSLOduration=4.164809494 podStartE2EDuration="4.164809494s" podCreationTimestamp="2026-02-02 11:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:18:09.119302371 +0000 UTC m=+1266.123551333" watchObservedRunningTime="2026-02-02 11:18:09.164809494 +0000 UTC m=+1266.169058456" Feb 02 11:18:10 crc kubenswrapper[4925]: I0202 11:18:10.049491 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" event={"ID":"abfcab41-3119-4ec5-94ac-32e949f0de93","Type":"ContainerStarted","Data":"c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13"} Feb 02 11:18:10 crc kubenswrapper[4925]: I0202 11:18:10.049600 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:10 crc kubenswrapper[4925]: I0202 11:18:10.079634 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" podStartSLOduration=5.079613666 podStartE2EDuration="5.079613666s" podCreationTimestamp="2026-02-02 11:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:18:10.072589009 +0000 UTC m=+1267.076837971" watchObservedRunningTime="2026-02-02 11:18:10.079613666 +0000 UTC m=+1267.083862628" Feb 02 11:18:10 crc kubenswrapper[4925]: I0202 11:18:10.677692 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e6de27-1a35-4fc7-8fc9-ca063454daa2" path="/var/lib/kubelet/pods/79e6de27-1a35-4fc7-8fc9-ca063454daa2/volumes" Feb 02 11:18:16 crc kubenswrapper[4925]: I0202 11:18:16.169113 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:18:16 crc kubenswrapper[4925]: I0202 11:18:16.247442 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f6t4k"] Feb 02 11:18:16 crc kubenswrapper[4925]: I0202 11:18:16.247892 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" podUID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" containerName="dnsmasq-dns" containerID="cri-o://cf54e11f7337d7a45e3e1be1098802c0309d9080bca23a51b31f024b19a1a0c3" gracePeriod=10 Feb 02 11:18:18 crc kubenswrapper[4925]: I0202 11:18:18.105303 4925 generic.go:334] "Generic (PLEG): container finished" podID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" containerID="cf54e11f7337d7a45e3e1be1098802c0309d9080bca23a51b31f024b19a1a0c3" exitCode=0 Feb 02 11:18:18 crc kubenswrapper[4925]: I0202 11:18:18.105392 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" 
event={"ID":"3b31fe1a-24c3-4692-bf53-f2a6c63c8444","Type":"ContainerDied","Data":"cf54e11f7337d7a45e3e1be1098802c0309d9080bca23a51b31f024b19a1a0c3"} Feb 02 11:18:19 crc kubenswrapper[4925]: I0202 11:18:19.070660 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" podUID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Feb 02 11:18:20 crc kubenswrapper[4925]: E0202 11:18:20.746454 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 02 11:18:20 crc kubenswrapper[4925]: E0202 11:18:20.747228 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n586h578h6h7h79h585h646hb7h7ch65chd5h598h5d8h674h9h575h64bh65chbh5bchcdh5dh54fhc4h684h548h564h655h5f5hch5dbh88q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfx92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(06f0a3f9-6761-457e-aeda-efcf1d326211): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:18:24 crc kubenswrapper[4925]: I0202 11:18:24.069950 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" podUID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Feb 02 11:18:25 crc kubenswrapper[4925]: I0202 11:18:25.160974 4925 generic.go:334] "Generic (PLEG): container finished" podID="60beee8c-66a8-426f-a911-91d108e9f12d" containerID="04bf78182aa9ebc59472bda9dd173eb78f57f80908346051af91c2ca8274c61a" exitCode=0 Feb 02 11:18:25 crc kubenswrapper[4925]: I0202 11:18:25.161021 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rjbr7" event={"ID":"60beee8c-66a8-426f-a911-91d108e9f12d","Type":"ContainerDied","Data":"04bf78182aa9ebc59472bda9dd173eb78f57f80908346051af91c2ca8274c61a"} Feb 02 11:18:25 crc kubenswrapper[4925]: E0202 11:18:25.808941 4925 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 02 11:18:25 crc kubenswrapper[4925]: E0202 11:18:25.809283 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9qz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUse
r:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-xxnvr_openstack(20d81564-431d-40c5-be81-3961fab3e8b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:18:25 crc kubenswrapper[4925]: E0202 11:18:25.810400 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-xxnvr" podUID="20d81564-431d-40c5-be81-3961fab3e8b8" Feb 02 11:18:26 crc kubenswrapper[4925]: E0202 11:18:26.171969 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-xxnvr" podUID="20d81564-431d-40c5-be81-3961fab3e8b8" Feb 02 11:18:26 crc kubenswrapper[4925]: E0202 11:18:26.308344 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:1a16efc577f4903134a7f775fae7101cfa67e7f74b5de90cbf341efb2005c6b0: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-cinder-api/blobs/sha256:1a16efc577f4903134a7f775fae7101cfa67e7f74b5de90cbf341efb2005c6b0\": context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 11:18:26 crc kubenswrapper[4925]: E0202 11:18:26.308496 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hxzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsU
ser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-d8tqm_openstack(b26690f1-3d10-4ef3-a16a-7c33dc1c62c0): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:1a16efc577f4903134a7f775fae7101cfa67e7f74b5de90cbf341efb2005c6b0: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-cinder-api/blobs/sha256:1a16efc577f4903134a7f775fae7101cfa67e7f74b5de90cbf341efb2005c6b0\": context canceled" logger="UnhandledError" Feb 02 11:18:26 crc kubenswrapper[4925]: E0202 11:18:26.309878 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:1a16efc577f4903134a7f775fae7101cfa67e7f74b5de90cbf341efb2005c6b0: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-cinder-api/blobs/sha256:1a16efc577f4903134a7f775fae7101cfa67e7f74b5de90cbf341efb2005c6b0\\\": context canceled\"" pod="openstack/cinder-db-sync-d8tqm" podUID="b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" Feb 02 11:18:26 crc kubenswrapper[4925]: E0202 11:18:26.328637 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 02 11:18:26 crc kubenswrapper[4925]: E0202 11:18:26.328783 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx9wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-gz8kp_openstack(15e0ab2c-a590-4b39-af8b-a055a29f01c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:18:26 crc kubenswrapper[4925]: E0202 11:18:26.330186 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-gz8kp" 
podUID="15e0ab2c-a590-4b39-af8b-a055a29f01c0" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.388680 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.500603 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-config\") pod \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.500756 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-nb\") pod \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.500799 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-dns-svc\") pod \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.500827 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-sb\") pod \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.500874 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs57g\" (UniqueName: \"kubernetes.io/projected/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-kube-api-access-zs57g\") pod \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\" (UID: \"3b31fe1a-24c3-4692-bf53-f2a6c63c8444\") " Feb 02 
11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.507395 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-kube-api-access-zs57g" (OuterVolumeSpecName: "kube-api-access-zs57g") pod "3b31fe1a-24c3-4692-bf53-f2a6c63c8444" (UID: "3b31fe1a-24c3-4692-bf53-f2a6c63c8444"). InnerVolumeSpecName "kube-api-access-zs57g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.547192 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3b31fe1a-24c3-4692-bf53-f2a6c63c8444" (UID: "3b31fe1a-24c3-4692-bf53-f2a6c63c8444"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.547458 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3b31fe1a-24c3-4692-bf53-f2a6c63c8444" (UID: "3b31fe1a-24c3-4692-bf53-f2a6c63c8444"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.551802 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-config" (OuterVolumeSpecName: "config") pod "3b31fe1a-24c3-4692-bf53-f2a6c63c8444" (UID: "3b31fe1a-24c3-4692-bf53-f2a6c63c8444"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.552501 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3b31fe1a-24c3-4692-bf53-f2a6c63c8444" (UID: "3b31fe1a-24c3-4692-bf53-f2a6c63c8444"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.602605 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.602639 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.602653 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.602661 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.602671 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs57g\" (UniqueName: \"kubernetes.io/projected/3b31fe1a-24c3-4692-bf53-f2a6c63c8444-kube-api-access-zs57g\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.723809 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.911499 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-credential-keys\") pod \"60beee8c-66a8-426f-a911-91d108e9f12d\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.911553 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-combined-ca-bundle\") pod \"60beee8c-66a8-426f-a911-91d108e9f12d\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.911621 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lhx8\" (UniqueName: \"kubernetes.io/projected/60beee8c-66a8-426f-a911-91d108e9f12d-kube-api-access-9lhx8\") pod \"60beee8c-66a8-426f-a911-91d108e9f12d\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.911759 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-config-data\") pod \"60beee8c-66a8-426f-a911-91d108e9f12d\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.911810 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-fernet-keys\") pod \"60beee8c-66a8-426f-a911-91d108e9f12d\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.911830 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-scripts\") pod \"60beee8c-66a8-426f-a911-91d108e9f12d\" (UID: \"60beee8c-66a8-426f-a911-91d108e9f12d\") " Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.916826 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60beee8c-66a8-426f-a911-91d108e9f12d-kube-api-access-9lhx8" (OuterVolumeSpecName: "kube-api-access-9lhx8") pod "60beee8c-66a8-426f-a911-91d108e9f12d" (UID: "60beee8c-66a8-426f-a911-91d108e9f12d"). InnerVolumeSpecName "kube-api-access-9lhx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.918251 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "60beee8c-66a8-426f-a911-91d108e9f12d" (UID: "60beee8c-66a8-426f-a911-91d108e9f12d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.918291 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "60beee8c-66a8-426f-a911-91d108e9f12d" (UID: "60beee8c-66a8-426f-a911-91d108e9f12d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.918433 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-scripts" (OuterVolumeSpecName: "scripts") pod "60beee8c-66a8-426f-a911-91d108e9f12d" (UID: "60beee8c-66a8-426f-a911-91d108e9f12d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.935559 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60beee8c-66a8-426f-a911-91d108e9f12d" (UID: "60beee8c-66a8-426f-a911-91d108e9f12d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:26 crc kubenswrapper[4925]: I0202 11:18:26.940286 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-config-data" (OuterVolumeSpecName: "config-data") pod "60beee8c-66a8-426f-a911-91d108e9f12d" (UID: "60beee8c-66a8-426f-a911-91d108e9f12d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.015316 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.015644 4925 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.015733 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.015895 4925 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:27 crc 
kubenswrapper[4925]: I0202 11:18:27.015978 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60beee8c-66a8-426f-a911-91d108e9f12d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.016058 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lhx8\" (UniqueName: \"kubernetes.io/projected/60beee8c-66a8-426f-a911-91d108e9f12d-kube-api-access-9lhx8\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.028055 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.176980 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" event={"ID":"3b31fe1a-24c3-4692-bf53-f2a6c63c8444","Type":"ContainerDied","Data":"35f24accee2c869132c88ea243c51994767a800d9dbbf1d123e982a555190ccb"} Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.177007 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-f6t4k" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.177034 4925 scope.go:117] "RemoveContainer" containerID="cf54e11f7337d7a45e3e1be1098802c0309d9080bca23a51b31f024b19a1a0c3" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.178709 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f0a3f9-6761-457e-aeda-efcf1d326211","Type":"ContainerStarted","Data":"6eacc1bc3bd3cb09a7948ac17f19496fff17a9fcfb2401e676264f20786cd75f"} Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.180527 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rjbr7" event={"ID":"60beee8c-66a8-426f-a911-91d108e9f12d","Type":"ContainerDied","Data":"c1e3c213903d327eae1f1e851d8f7d8b15e026d760faad28aadb8224af8d7f12"} Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.180553 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e3c213903d327eae1f1e851d8f7d8b15e026d760faad28aadb8224af8d7f12" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.180612 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rjbr7" Feb 02 11:18:27 crc kubenswrapper[4925]: E0202 11:18:27.182598 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-gz8kp" podUID="15e0ab2c-a590-4b39-af8b-a055a29f01c0" Feb 02 11:18:27 crc kubenswrapper[4925]: E0202 11:18:27.182787 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-d8tqm" podUID="b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.224396 4925 scope.go:117] "RemoveContainer" containerID="b7b93ab4c3abb5a23a756b45ee4b9b7e55152b8388d782444d1e2229db24ce33" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.267426 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f6t4k"] Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.275292 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-f6t4k"] Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.285224 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rjbr7"] Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.291504 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rjbr7"] Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.355370 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nzrtg"] Feb 02 11:18:27 crc kubenswrapper[4925]: E0202 11:18:27.355738 4925 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="79e6de27-1a35-4fc7-8fc9-ca063454daa2" containerName="init" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.355762 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e6de27-1a35-4fc7-8fc9-ca063454daa2" containerName="init" Feb 02 11:18:27 crc kubenswrapper[4925]: E0202 11:18:27.355787 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" containerName="init" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.355796 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" containerName="init" Feb 02 11:18:27 crc kubenswrapper[4925]: E0202 11:18:27.355807 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60beee8c-66a8-426f-a911-91d108e9f12d" containerName="keystone-bootstrap" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.355816 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="60beee8c-66a8-426f-a911-91d108e9f12d" containerName="keystone-bootstrap" Feb 02 11:18:27 crc kubenswrapper[4925]: E0202 11:18:27.355824 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" containerName="dnsmasq-dns" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.355831 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" containerName="dnsmasq-dns" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.356017 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="60beee8c-66a8-426f-a911-91d108e9f12d" containerName="keystone-bootstrap" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.356035 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e6de27-1a35-4fc7-8fc9-ca063454daa2" containerName="init" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.356053 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" 
containerName="dnsmasq-dns" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.356708 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.359690 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.359758 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 11:18:27 crc kubenswrapper[4925]: W0202 11:18:27.359693 4925 reflector.go:561] object-"openstack"/"osp-secret": failed to list *v1.Secret: secrets "osp-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 02 11:18:27 crc kubenswrapper[4925]: E0202 11:18:27.359977 4925 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"osp-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"osp-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.360108 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.363884 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wkg8f" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.367149 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nzrtg"] Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.524239 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvmws\" (UniqueName: 
\"kubernetes.io/projected/b4d7c58b-1da7-468e-98a5-910467641690-kube-api-access-pvmws\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.524323 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-combined-ca-bundle\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.524347 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-scripts\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.524397 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-fernet-keys\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.524414 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-config-data\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.524444 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-credential-keys\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.625358 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-fernet-keys\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.625407 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-config-data\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.625446 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-credential-keys\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.625499 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvmws\" (UniqueName: \"kubernetes.io/projected/b4d7c58b-1da7-468e-98a5-910467641690-kube-api-access-pvmws\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.625549 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-combined-ca-bundle\") pod 
\"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.625565 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-scripts\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.639016 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-credential-keys\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.639153 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-scripts\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.639165 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-fernet-keys\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.639382 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-combined-ca-bundle\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: 
I0202 11:18:27.639400 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-config-data\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.642324 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvmws\" (UniqueName: \"kubernetes.io/projected/b4d7c58b-1da7-468e-98a5-910467641690-kube-api-access-pvmws\") pod \"keystone-bootstrap-nzrtg\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:27 crc kubenswrapper[4925]: I0202 11:18:27.672250 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:28 crc kubenswrapper[4925]: I0202 11:18:28.101726 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nzrtg"] Feb 02 11:18:28 crc kubenswrapper[4925]: W0202 11:18:28.111241 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4d7c58b_1da7_468e_98a5_910467641690.slice/crio-911c4d62ab72bf48dbe808a9bff56031334fca77aa1dceb5a0403a760269de0b WatchSource:0}: Error finding container 911c4d62ab72bf48dbe808a9bff56031334fca77aa1dceb5a0403a760269de0b: Status 404 returned error can't find the container with id 911c4d62ab72bf48dbe808a9bff56031334fca77aa1dceb5a0403a760269de0b Feb 02 11:18:28 crc kubenswrapper[4925]: I0202 11:18:28.184263 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 11:18:28 crc kubenswrapper[4925]: I0202 11:18:28.188418 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzrtg" 
event={"ID":"b4d7c58b-1da7-468e-98a5-910467641690","Type":"ContainerStarted","Data":"911c4d62ab72bf48dbe808a9bff56031334fca77aa1dceb5a0403a760269de0b"} Feb 02 11:18:28 crc kubenswrapper[4925]: I0202 11:18:28.674373 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b31fe1a-24c3-4692-bf53-f2a6c63c8444" path="/var/lib/kubelet/pods/3b31fe1a-24c3-4692-bf53-f2a6c63c8444/volumes" Feb 02 11:18:28 crc kubenswrapper[4925]: I0202 11:18:28.675529 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60beee8c-66a8-426f-a911-91d108e9f12d" path="/var/lib/kubelet/pods/60beee8c-66a8-426f-a911-91d108e9f12d/volumes" Feb 02 11:18:29 crc kubenswrapper[4925]: I0202 11:18:29.198887 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzrtg" event={"ID":"b4d7c58b-1da7-468e-98a5-910467641690","Type":"ContainerStarted","Data":"a4b858bb2109379e1e27e59e23882e68bf3438b4f519bcfb874fe48ebcbd675b"} Feb 02 11:18:38 crc kubenswrapper[4925]: I0202 11:18:38.682270 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nzrtg" podStartSLOduration=11.682251746 podStartE2EDuration="11.682251746s" podCreationTimestamp="2026-02-02 11:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:18:29.216836227 +0000 UTC m=+1286.221085189" watchObservedRunningTime="2026-02-02 11:18:38.682251746 +0000 UTC m=+1295.686500708" Feb 02 11:18:40 crc kubenswrapper[4925]: I0202 11:18:40.291785 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f0a3f9-6761-457e-aeda-efcf1d326211","Type":"ContainerStarted","Data":"4c7bbfe107524d5629e25960b58b11be93232cffa9639b838da4407cae619de7"} Feb 02 11:18:40 crc kubenswrapper[4925]: I0202 11:18:40.295020 4925 generic.go:334] "Generic (PLEG): container finished" podID="b4d7c58b-1da7-468e-98a5-910467641690" 
containerID="a4b858bb2109379e1e27e59e23882e68bf3438b4f519bcfb874fe48ebcbd675b" exitCode=0 Feb 02 11:18:40 crc kubenswrapper[4925]: I0202 11:18:40.295068 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzrtg" event={"ID":"b4d7c58b-1da7-468e-98a5-910467641690","Type":"ContainerDied","Data":"a4b858bb2109379e1e27e59e23882e68bf3438b4f519bcfb874fe48ebcbd675b"} Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.646010 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.681455 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-fernet-keys\") pod \"b4d7c58b-1da7-468e-98a5-910467641690\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.681540 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-credential-keys\") pod \"b4d7c58b-1da7-468e-98a5-910467641690\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.681588 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-combined-ca-bundle\") pod \"b4d7c58b-1da7-468e-98a5-910467641690\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.681623 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-scripts\") pod \"b4d7c58b-1da7-468e-98a5-910467641690\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " Feb 02 11:18:42 crc 
kubenswrapper[4925]: I0202 11:18:41.681684 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-config-data\") pod \"b4d7c58b-1da7-468e-98a5-910467641690\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.681725 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvmws\" (UniqueName: \"kubernetes.io/projected/b4d7c58b-1da7-468e-98a5-910467641690-kube-api-access-pvmws\") pod \"b4d7c58b-1da7-468e-98a5-910467641690\" (UID: \"b4d7c58b-1da7-468e-98a5-910467641690\") " Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.695634 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b4d7c58b-1da7-468e-98a5-910467641690" (UID: "b4d7c58b-1da7-468e-98a5-910467641690"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.695957 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-scripts" (OuterVolumeSpecName: "scripts") pod "b4d7c58b-1da7-468e-98a5-910467641690" (UID: "b4d7c58b-1da7-468e-98a5-910467641690"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.696219 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b4d7c58b-1da7-468e-98a5-910467641690" (UID: "b4d7c58b-1da7-468e-98a5-910467641690"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.696460 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d7c58b-1da7-468e-98a5-910467641690-kube-api-access-pvmws" (OuterVolumeSpecName: "kube-api-access-pvmws") pod "b4d7c58b-1da7-468e-98a5-910467641690" (UID: "b4d7c58b-1da7-468e-98a5-910467641690"). InnerVolumeSpecName "kube-api-access-pvmws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.715182 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-config-data" (OuterVolumeSpecName: "config-data") pod "b4d7c58b-1da7-468e-98a5-910467641690" (UID: "b4d7c58b-1da7-468e-98a5-910467641690"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.715279 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4d7c58b-1da7-468e-98a5-910467641690" (UID: "b4d7c58b-1da7-468e-98a5-910467641690"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.783568 4925 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.783602 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.783613 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.783621 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.783630 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvmws\" (UniqueName: \"kubernetes.io/projected/b4d7c58b-1da7-468e-98a5-910467641690-kube-api-access-pvmws\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:41.783639 4925 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4d7c58b-1da7-468e-98a5-910467641690-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.319249 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nzrtg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.319454 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nzrtg" event={"ID":"b4d7c58b-1da7-468e-98a5-910467641690","Type":"ContainerDied","Data":"911c4d62ab72bf48dbe808a9bff56031334fca77aa1dceb5a0403a760269de0b"} Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.320221 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="911c4d62ab72bf48dbe808a9bff56031334fca77aa1dceb5a0403a760269de0b" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.326973 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xxnvr" event={"ID":"20d81564-431d-40c5-be81-3961fab3e8b8","Type":"ContainerStarted","Data":"6da9662bb56f1903a4f12e6a7190939a797d70d67bc24d74d3fa653893762626"} Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.354378 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xxnvr" podStartSLOduration=2.9394637 podStartE2EDuration="37.354363845s" podCreationTimestamp="2026-02-02 11:18:05 +0000 UTC" firstStartedPulling="2026-02-02 11:18:06.957554659 +0000 UTC m=+1263.961803621" lastFinishedPulling="2026-02-02 11:18:41.372454814 +0000 UTC m=+1298.376703766" observedRunningTime="2026-02-02 11:18:42.349391082 +0000 UTC m=+1299.353640044" watchObservedRunningTime="2026-02-02 11:18:42.354363845 +0000 UTC m=+1299.358612807" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.412636 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7878d757f7-z5tzg"] Feb 02 11:18:42 crc kubenswrapper[4925]: E0202 11:18:42.414764 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d7c58b-1da7-468e-98a5-910467641690" containerName="keystone-bootstrap" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.414785 4925 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b4d7c58b-1da7-468e-98a5-910467641690" containerName="keystone-bootstrap" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.414977 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d7c58b-1da7-468e-98a5-910467641690" containerName="keystone-bootstrap" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.415609 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.426245 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7878d757f7-z5tzg"] Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.432880 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.433229 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.433795 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.433901 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wkg8f" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.434003 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.437604 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.542404 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-config-data\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " 
pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.542473 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-public-tls-certs\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.542537 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-fernet-keys\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.542574 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-credential-keys\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.542624 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-combined-ca-bundle\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.542660 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-internal-tls-certs\") pod \"keystone-7878d757f7-z5tzg\" (UID: 
\"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.542704 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvcqf\" (UniqueName: \"kubernetes.io/projected/c10f0dec-2709-40e9-90ce-ad8698d98599-kube-api-access-bvcqf\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.542757 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-scripts\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.643629 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-credential-keys\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.643686 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-combined-ca-bundle\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.643724 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-internal-tls-certs\") pod \"keystone-7878d757f7-z5tzg\" (UID: 
\"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.643760 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvcqf\" (UniqueName: \"kubernetes.io/projected/c10f0dec-2709-40e9-90ce-ad8698d98599-kube-api-access-bvcqf\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.643799 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-scripts\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.643819 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-config-data\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.643842 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-public-tls-certs\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.643881 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-fernet-keys\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 
crc kubenswrapper[4925]: I0202 11:18:42.648391 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-scripts\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.649174 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-credential-keys\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.650164 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-config-data\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.650463 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-combined-ca-bundle\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.650455 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-fernet-keys\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.651320 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-public-tls-certs\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.654046 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c10f0dec-2709-40e9-90ce-ad8698d98599-internal-tls-certs\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.660722 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvcqf\" (UniqueName: \"kubernetes.io/projected/c10f0dec-2709-40e9-90ce-ad8698d98599-kube-api-access-bvcqf\") pod \"keystone-7878d757f7-z5tzg\" (UID: \"c10f0dec-2709-40e9-90ce-ad8698d98599\") " pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:42 crc kubenswrapper[4925]: I0202 11:18:42.765106 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:43 crc kubenswrapper[4925]: I0202 11:18:43.211178 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7878d757f7-z5tzg"] Feb 02 11:18:43 crc kubenswrapper[4925]: W0202 11:18:43.325270 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc10f0dec_2709_40e9_90ce_ad8698d98599.slice/crio-45fff7ec86f5bb13d9e46a8c64b359407b934614f482a6d9c2720b3280c36611 WatchSource:0}: Error finding container 45fff7ec86f5bb13d9e46a8c64b359407b934614f482a6d9c2720b3280c36611: Status 404 returned error can't find the container with id 45fff7ec86f5bb13d9e46a8c64b359407b934614f482a6d9c2720b3280c36611 Feb 02 11:18:43 crc kubenswrapper[4925]: I0202 11:18:43.348220 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7878d757f7-z5tzg" event={"ID":"c10f0dec-2709-40e9-90ce-ad8698d98599","Type":"ContainerStarted","Data":"45fff7ec86f5bb13d9e46a8c64b359407b934614f482a6d9c2720b3280c36611"} Feb 02 11:18:44 crc kubenswrapper[4925]: I0202 11:18:44.367114 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7878d757f7-z5tzg" event={"ID":"c10f0dec-2709-40e9-90ce-ad8698d98599","Type":"ContainerStarted","Data":"d0d5f5f9cfe0edd4881a3f6a091be1ad45eca0fdf308cbdc901f8e2857600730"} Feb 02 11:18:44 crc kubenswrapper[4925]: I0202 11:18:44.367477 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:18:44 crc kubenswrapper[4925]: I0202 11:18:44.370298 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gz8kp" event={"ID":"15e0ab2c-a590-4b39-af8b-a055a29f01c0","Type":"ContainerStarted","Data":"18f734283bcd663a6e17b14ab37d7800408cddea2006d707687829dfdf661294"} Feb 02 11:18:44 crc kubenswrapper[4925]: I0202 11:18:44.393852 4925 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/keystone-7878d757f7-z5tzg" podStartSLOduration=2.393831445 podStartE2EDuration="2.393831445s" podCreationTimestamp="2026-02-02 11:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:18:44.388979956 +0000 UTC m=+1301.393228928" watchObservedRunningTime="2026-02-02 11:18:44.393831445 +0000 UTC m=+1301.398080407" Feb 02 11:18:44 crc kubenswrapper[4925]: I0202 11:18:44.411033 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-gz8kp" podStartSLOduration=2.882309914 podStartE2EDuration="39.411012773s" podCreationTimestamp="2026-02-02 11:18:05 +0000 UTC" firstStartedPulling="2026-02-02 11:18:06.864111676 +0000 UTC m=+1263.868360638" lastFinishedPulling="2026-02-02 11:18:43.392814535 +0000 UTC m=+1300.397063497" observedRunningTime="2026-02-02 11:18:44.410229742 +0000 UTC m=+1301.414478724" watchObservedRunningTime="2026-02-02 11:18:44.411012773 +0000 UTC m=+1301.415261735" Feb 02 11:19:15 crc kubenswrapper[4925]: I0202 11:19:15.343843 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7878d757f7-z5tzg" Feb 02 11:19:15 crc kubenswrapper[4925]: I0202 11:19:15.901899 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 11:19:15 crc kubenswrapper[4925]: I0202 11:19:15.905432 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 11:19:15 crc kubenswrapper[4925]: I0202 11:19:15.910912 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 11:19:15 crc kubenswrapper[4925]: I0202 11:19:15.911448 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 11:19:15 crc kubenswrapper[4925]: I0202 11:19:15.911849 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 11:19:15 crc kubenswrapper[4925]: I0202 11:19:15.913197 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-t6l5h" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.047168 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/515789df-211a-4465-8f1f-5ab3dadcb813-openstack-config-secret\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.047219 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/515789df-211a-4465-8f1f-5ab3dadcb813-openstack-config\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.047294 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68jt\" (UniqueName: \"kubernetes.io/projected/515789df-211a-4465-8f1f-5ab3dadcb813-kube-api-access-h68jt\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.047325 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515789df-211a-4465-8f1f-5ab3dadcb813-combined-ca-bundle\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.149020 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68jt\" (UniqueName: \"kubernetes.io/projected/515789df-211a-4465-8f1f-5ab3dadcb813-kube-api-access-h68jt\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.149107 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515789df-211a-4465-8f1f-5ab3dadcb813-combined-ca-bundle\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.149213 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/515789df-211a-4465-8f1f-5ab3dadcb813-openstack-config-secret\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.149241 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/515789df-211a-4465-8f1f-5ab3dadcb813-openstack-config\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.259188 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/515789df-211a-4465-8f1f-5ab3dadcb813-openstack-config\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.267585 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/515789df-211a-4465-8f1f-5ab3dadcb813-openstack-config-secret\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.267728 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68jt\" (UniqueName: \"kubernetes.io/projected/515789df-211a-4465-8f1f-5ab3dadcb813-kube-api-access-h68jt\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.268890 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515789df-211a-4465-8f1f-5ab3dadcb813-combined-ca-bundle\") pod \"openstackclient\" (UID: \"515789df-211a-4465-8f1f-5ab3dadcb813\") " pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: I0202 11:19:16.521896 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 11:19:16 crc kubenswrapper[4925]: E0202 11:19:16.976637 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 11:19:16 crc kubenswrapper[4925]: E0202 11:19:16.976837 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bund
le.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hxzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-d8tqm_openstack(b26690f1-3d10-4ef3-a16a-7c33dc1c62c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:19:16 crc kubenswrapper[4925]: E0202 11:19:16.978020 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-d8tqm" podUID="b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" Feb 02 11:19:17 crc kubenswrapper[4925]: E0202 11:19:17.472109 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 02 11:19:17 crc kubenswrapper[4925]: E0202 11:19:17.472630 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfx92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(06f0a3f9-6761-457e-aeda-efcf1d326211): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 11:19:17 crc kubenswrapper[4925]: E0202 11:19:17.473815 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" Feb 02 11:19:17 crc kubenswrapper[4925]: I0202 11:19:17.501425 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 11:19:17 crc kubenswrapper[4925]: W0202 11:19:17.506127 4925 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod515789df_211a_4465_8f1f_5ab3dadcb813.slice/crio-65bd2bfa9d405c5e7e425ae71de3dd9d1b30a4f336510e1d3e6af5a39b5ffd44 WatchSource:0}: Error finding container 65bd2bfa9d405c5e7e425ae71de3dd9d1b30a4f336510e1d3e6af5a39b5ffd44: Status 404 returned error can't find the container with id 65bd2bfa9d405c5e7e425ae71de3dd9d1b30a4f336510e1d3e6af5a39b5ffd44 Feb 02 11:19:17 crc kubenswrapper[4925]: I0202 11:19:17.674533 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"515789df-211a-4465-8f1f-5ab3dadcb813","Type":"ContainerStarted","Data":"65bd2bfa9d405c5e7e425ae71de3dd9d1b30a4f336510e1d3e6af5a39b5ffd44"} Feb 02 11:19:17 crc kubenswrapper[4925]: I0202 11:19:17.674722 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerName="ceilometer-notification-agent" containerID="cri-o://6eacc1bc3bd3cb09a7948ac17f19496fff17a9fcfb2401e676264f20786cd75f" gracePeriod=30 Feb 02 11:19:17 crc kubenswrapper[4925]: I0202 11:19:17.674844 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerName="sg-core" containerID="cri-o://4c7bbfe107524d5629e25960b58b11be93232cffa9639b838da4407cae619de7" gracePeriod=30 Feb 02 11:19:18 crc kubenswrapper[4925]: I0202 11:19:18.685734 4925 generic.go:334] "Generic (PLEG): container finished" podID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerID="4c7bbfe107524d5629e25960b58b11be93232cffa9639b838da4407cae619de7" exitCode=2 Feb 02 11:19:18 crc kubenswrapper[4925]: I0202 11:19:18.685813 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f0a3f9-6761-457e-aeda-efcf1d326211","Type":"ContainerDied","Data":"4c7bbfe107524d5629e25960b58b11be93232cffa9639b838da4407cae619de7"} Feb 02 11:19:21 crc 
kubenswrapper[4925]: I0202 11:19:21.717826 4925 generic.go:334] "Generic (PLEG): container finished" podID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerID="6eacc1bc3bd3cb09a7948ac17f19496fff17a9fcfb2401e676264f20786cd75f" exitCode=0
Feb 02 11:19:21 crc kubenswrapper[4925]: I0202 11:19:21.717889 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f0a3f9-6761-457e-aeda-efcf1d326211","Type":"ContainerDied","Data":"6eacc1bc3bd3cb09a7948ac17f19496fff17a9fcfb2401e676264f20786cd75f"}
Feb 02 11:19:25 crc kubenswrapper[4925]: I0202 11:19:25.612319 4925 scope.go:117] "RemoveContainer" containerID="eddaa1ec739d09d78fa3226e7024bea619921a2fed0e9c5119b79ed57d73ce4f"
Feb 02 11:19:25 crc kubenswrapper[4925]: I0202 11:19:25.805409 4925 scope.go:117] "RemoveContainer" containerID="c215ae3cdb2d60bdf907d5616c33976fa15fdd4ef857b4e0f0cec2bcf1ac654a"
Feb 02 11:19:25 crc kubenswrapper[4925]: I0202 11:19:25.946554 4925 scope.go:117] "RemoveContainer" containerID="27e3e62e6a04c1c6c968b132bca12fd8cc5f5fbdc2a08725809ff8583632791d"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.162659 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.234783 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-log-httpd\") pod \"06f0a3f9-6761-457e-aeda-efcf1d326211\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") "
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.234894 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-run-httpd\") pod \"06f0a3f9-6761-457e-aeda-efcf1d326211\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") "
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.234927 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-scripts\") pod \"06f0a3f9-6761-457e-aeda-efcf1d326211\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") "
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.234954 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfx92\" (UniqueName: \"kubernetes.io/projected/06f0a3f9-6761-457e-aeda-efcf1d326211-kube-api-access-dfx92\") pod \"06f0a3f9-6761-457e-aeda-efcf1d326211\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") "
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.235052 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-config-data\") pod \"06f0a3f9-6761-457e-aeda-efcf1d326211\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") "
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.235162 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-combined-ca-bundle\") pod \"06f0a3f9-6761-457e-aeda-efcf1d326211\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") "
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.235220 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-sg-core-conf-yaml\") pod \"06f0a3f9-6761-457e-aeda-efcf1d326211\" (UID: \"06f0a3f9-6761-457e-aeda-efcf1d326211\") "
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.238526 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "06f0a3f9-6761-457e-aeda-efcf1d326211" (UID: "06f0a3f9-6761-457e-aeda-efcf1d326211"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.238888 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "06f0a3f9-6761-457e-aeda-efcf1d326211" (UID: "06f0a3f9-6761-457e-aeda-efcf1d326211"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.239507 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-scripts" (OuterVolumeSpecName: "scripts") pod "06f0a3f9-6761-457e-aeda-efcf1d326211" (UID: "06f0a3f9-6761-457e-aeda-efcf1d326211"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.240278 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f0a3f9-6761-457e-aeda-efcf1d326211-kube-api-access-dfx92" (OuterVolumeSpecName: "kube-api-access-dfx92") pod "06f0a3f9-6761-457e-aeda-efcf1d326211" (UID: "06f0a3f9-6761-457e-aeda-efcf1d326211"). InnerVolumeSpecName "kube-api-access-dfx92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.260857 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "06f0a3f9-6761-457e-aeda-efcf1d326211" (UID: "06f0a3f9-6761-457e-aeda-efcf1d326211"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.261554 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06f0a3f9-6761-457e-aeda-efcf1d326211" (UID: "06f0a3f9-6761-457e-aeda-efcf1d326211"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.268671 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-config-data" (OuterVolumeSpecName: "config-data") pod "06f0a3f9-6761-457e-aeda-efcf1d326211" (UID: "06f0a3f9-6761-457e-aeda-efcf1d326211"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.337506 4925 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.337542 4925 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f0a3f9-6761-457e-aeda-efcf1d326211-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.337551 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.337560 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfx92\" (UniqueName: \"kubernetes.io/projected/06f0a3f9-6761-457e-aeda-efcf1d326211-kube-api-access-dfx92\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.337572 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.337580 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.337590 4925 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f0a3f9-6761-457e-aeda-efcf1d326211-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.758915 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f0a3f9-6761-457e-aeda-efcf1d326211","Type":"ContainerDied","Data":"d72713ab9bd768609db8a49371e3d1a7411cd047a840e9763b4c14f41a0f0c4e"}
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.759242 4925 scope.go:117] "RemoveContainer" containerID="4c7bbfe107524d5629e25960b58b11be93232cffa9639b838da4407cae619de7"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.759409 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.762679 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"515789df-211a-4465-8f1f-5ab3dadcb813","Type":"ContainerStarted","Data":"bd56e6453819419ecb1164299458e2f1d9ccbee33465b71359b7015701c66675"}
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.766766 4925 generic.go:334] "Generic (PLEG): container finished" podID="20d81564-431d-40c5-be81-3961fab3e8b8" containerID="6da9662bb56f1903a4f12e6a7190939a797d70d67bc24d74d3fa653893762626" exitCode=0
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.766817 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xxnvr" event={"ID":"20d81564-431d-40c5-be81-3961fab3e8b8","Type":"ContainerDied","Data":"6da9662bb56f1903a4f12e6a7190939a797d70d67bc24d74d3fa653893762626"}
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.808902 4925 scope.go:117] "RemoveContainer" containerID="6eacc1bc3bd3cb09a7948ac17f19496fff17a9fcfb2401e676264f20786cd75f"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.811598 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.365174786 podStartE2EDuration="11.811577534s" podCreationTimestamp="2026-02-02 11:19:15 +0000 UTC" firstStartedPulling="2026-02-02 11:19:17.508534836 +0000 UTC m=+1334.512783798" lastFinishedPulling="2026-02-02 11:19:25.954937574 +0000 UTC m=+1342.959186546" observedRunningTime="2026-02-02 11:19:26.806401146 +0000 UTC m=+1343.810650108" watchObservedRunningTime="2026-02-02 11:19:26.811577534 +0000 UTC m=+1343.815826496"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.849700 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.856590 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.877350 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:19:26 crc kubenswrapper[4925]: E0202 11:19:26.877786 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerName="ceilometer-notification-agent"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.877815 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerName="ceilometer-notification-agent"
Feb 02 11:19:26 crc kubenswrapper[4925]: E0202 11:19:26.877838 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerName="sg-core"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.877846 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerName="sg-core"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.878060 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerName="sg-core"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.878156 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" containerName="ceilometer-notification-agent"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.879582 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.883778 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.884019 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.894788 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.946717 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-log-httpd\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.946830 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-scripts\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.946862 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-run-httpd\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.946908 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.947144 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-config-data\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.947180 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:26 crc kubenswrapper[4925]: I0202 11:19:26.947258 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7gww\" (UniqueName: \"kubernetes.io/projected/eb1481b0-f9a0-4094-84da-002dfab54a82-kube-api-access-t7gww\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.048467 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-config-data\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.048517 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.048562 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7gww\" (UniqueName: \"kubernetes.io/projected/eb1481b0-f9a0-4094-84da-002dfab54a82-kube-api-access-t7gww\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.048615 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-log-httpd\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.048675 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-scripts\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.048694 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-run-httpd\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.048737 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.049734 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-log-httpd\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.050153 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-run-httpd\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.053781 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.054159 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-config-data\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.054526 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.054619 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-scripts\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.068026 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7gww\" (UniqueName: \"kubernetes.io/projected/eb1481b0-f9a0-4094-84da-002dfab54a82-kube-api-access-t7gww\") pod \"ceilometer-0\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.203100 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 11:19:27 crc kubenswrapper[4925]: W0202 11:19:27.656940 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb1481b0_f9a0_4094_84da_002dfab54a82.slice/crio-7e630a5d8db9d582e9cd8f448272dfab8bf3ac7caabfac6257c12f1334cf7682 WatchSource:0}: Error finding container 7e630a5d8db9d582e9cd8f448272dfab8bf3ac7caabfac6257c12f1334cf7682: Status 404 returned error can't find the container with id 7e630a5d8db9d582e9cd8f448272dfab8bf3ac7caabfac6257c12f1334cf7682
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.664207 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:19:27 crc kubenswrapper[4925]: I0202 11:19:27.776138 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerStarted","Data":"7e630a5d8db9d582e9cd8f448272dfab8bf3ac7caabfac6257c12f1334cf7682"}
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.079701 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xxnvr"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.167731 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9qz5\" (UniqueName: \"kubernetes.io/projected/20d81564-431d-40c5-be81-3961fab3e8b8-kube-api-access-h9qz5\") pod \"20d81564-431d-40c5-be81-3961fab3e8b8\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") "
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.167868 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-combined-ca-bundle\") pod \"20d81564-431d-40c5-be81-3961fab3e8b8\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") "
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.167967 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d81564-431d-40c5-be81-3961fab3e8b8-logs\") pod \"20d81564-431d-40c5-be81-3961fab3e8b8\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") "
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.168050 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-config-data\") pod \"20d81564-431d-40c5-be81-3961fab3e8b8\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") "
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.168814 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-scripts\") pod \"20d81564-431d-40c5-be81-3961fab3e8b8\" (UID: \"20d81564-431d-40c5-be81-3961fab3e8b8\") "
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.168474 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20d81564-431d-40c5-be81-3961fab3e8b8-logs" (OuterVolumeSpecName: "logs") pod "20d81564-431d-40c5-be81-3961fab3e8b8" (UID: "20d81564-431d-40c5-be81-3961fab3e8b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.174208 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-scripts" (OuterVolumeSpecName: "scripts") pod "20d81564-431d-40c5-be81-3961fab3e8b8" (UID: "20d81564-431d-40c5-be81-3961fab3e8b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.176408 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d81564-431d-40c5-be81-3961fab3e8b8-kube-api-access-h9qz5" (OuterVolumeSpecName: "kube-api-access-h9qz5") pod "20d81564-431d-40c5-be81-3961fab3e8b8" (UID: "20d81564-431d-40c5-be81-3961fab3e8b8"). InnerVolumeSpecName "kube-api-access-h9qz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.197533 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-config-data" (OuterVolumeSpecName: "config-data") pod "20d81564-431d-40c5-be81-3961fab3e8b8" (UID: "20d81564-431d-40c5-be81-3961fab3e8b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.198624 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20d81564-431d-40c5-be81-3961fab3e8b8" (UID: "20d81564-431d-40c5-be81-3961fab3e8b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.271325 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.271542 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9qz5\" (UniqueName: \"kubernetes.io/projected/20d81564-431d-40c5-be81-3961fab3e8b8-kube-api-access-h9qz5\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.271618 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.271705 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d81564-431d-40c5-be81-3961fab3e8b8-logs\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.271788 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d81564-431d-40c5-be81-3961fab3e8b8-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.322557 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.676793 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f0a3f9-6761-457e-aeda-efcf1d326211" path="/var/lib/kubelet/pods/06f0a3f9-6761-457e-aeda-efcf1d326211/volumes"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.786828 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerStarted","Data":"fc7e59e048c4be8c5179fedc778f53800b55d0a975b946e1d548e65ff14a385b"}
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.788664 4925 generic.go:334] "Generic (PLEG): container finished" podID="66bbba42-9e45-446e-8042-a428a6269d08" containerID="904e9cf8bf2d427ca1214200e077aff4370afbced0959f9f89e43ff54f630981" exitCode=0
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.788764 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xtzng" event={"ID":"66bbba42-9e45-446e-8042-a428a6269d08","Type":"ContainerDied","Data":"904e9cf8bf2d427ca1214200e077aff4370afbced0959f9f89e43ff54f630981"}
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.790066 4925 generic.go:334] "Generic (PLEG): container finished" podID="15e0ab2c-a590-4b39-af8b-a055a29f01c0" containerID="18f734283bcd663a6e17b14ab37d7800408cddea2006d707687829dfdf661294" exitCode=0
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.790205 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gz8kp" event={"ID":"15e0ab2c-a590-4b39-af8b-a055a29f01c0","Type":"ContainerDied","Data":"18f734283bcd663a6e17b14ab37d7800408cddea2006d707687829dfdf661294"}
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.791517 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xxnvr" event={"ID":"20d81564-431d-40c5-be81-3961fab3e8b8","Type":"ContainerDied","Data":"a792f477593bddbc74ca0ef3fe6ba2d064533ff7c172e35cd83548eefc9bbaec"}
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.791545 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a792f477593bddbc74ca0ef3fe6ba2d064533ff7c172e35cd83548eefc9bbaec"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.791633 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xxnvr"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.992832 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-596c688466-nwnv5"]
Feb 02 11:19:28 crc kubenswrapper[4925]: E0202 11:19:28.993281 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d81564-431d-40c5-be81-3961fab3e8b8" containerName="placement-db-sync"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.993300 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d81564-431d-40c5-be81-3961fab3e8b8" containerName="placement-db-sync"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.993500 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d81564-431d-40c5-be81-3961fab3e8b8" containerName="placement-db-sync"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.994539 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.996458 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.996620 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.996717 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k25b6"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.997229 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 02 11:19:28 crc kubenswrapper[4925]: I0202 11:19:28.997285 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.011593 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-596c688466-nwnv5"]
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.088671 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-public-tls-certs\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.088935 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-combined-ca-bundle\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.089157 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-internal-tls-certs\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.089203 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg75g\" (UniqueName: \"kubernetes.io/projected/97b44970-d770-46b9-9c10-a8ec03d3bbaf-kube-api-access-zg75g\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.089251 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-scripts\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.089330 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-config-data\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.089401 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97b44970-d770-46b9-9c10-a8ec03d3bbaf-logs\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.190385 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-config-data\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.190728 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97b44970-d770-46b9-9c10-a8ec03d3bbaf-logs\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.190765 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-public-tls-certs\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.190834 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-combined-ca-bundle\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.190884 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-internal-tls-certs\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.190905 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg75g\" (UniqueName: \"kubernetes.io/projected/97b44970-d770-46b9-9c10-a8ec03d3bbaf-kube-api-access-zg75g\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.190921 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-scripts\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.192593 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97b44970-d770-46b9-9c10-a8ec03d3bbaf-logs\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.196356 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-combined-ca-bundle\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.197994 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-internal-tls-certs\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.198259 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-public-tls-certs\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.199458 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-scripts\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.218446 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg75g\" (UniqueName: \"kubernetes.io/projected/97b44970-d770-46b9-9c10-a8ec03d3bbaf-kube-api-access-zg75g\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.218983 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/97b44970-d770-46b9-9c10-a8ec03d3bbaf-config-data\") pod \"placement-596c688466-nwnv5\" (UID: \"97b44970-d770-46b9-9c10-a8ec03d3bbaf\") " pod="openstack/placement-596c688466-nwnv5" Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.310113 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-596c688466-nwnv5" Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.775836 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-596c688466-nwnv5"] Feb 02 11:19:29 crc kubenswrapper[4925]: W0202 11:19:29.780824 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b44970_d770_46b9_9c10_a8ec03d3bbaf.slice/crio-aed27bb2d195d3d96b1db20aa543c7bf9e567a428b1e552439663aed535e3211 WatchSource:0}: Error finding container aed27bb2d195d3d96b1db20aa543c7bf9e567a428b1e552439663aed535e3211: Status 404 returned error can't find the container with id aed27bb2d195d3d96b1db20aa543c7bf9e567a428b1e552439663aed535e3211 Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.800712 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerStarted","Data":"23676cbb3ee982d4f8e83476f0cf8428b03205f1ac3e7cb99c173ef527c5c0d4"} Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.802303 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-596c688466-nwnv5" event={"ID":"97b44970-d770-46b9-9c10-a8ec03d3bbaf","Type":"ContainerStarted","Data":"aed27bb2d195d3d96b1db20aa543c7bf9e567a428b1e552439663aed535e3211"} Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.867750 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-79pz8"] Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.869301 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.879378 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-79pz8"] Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.907927 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c13cad78-20d5-4520-81f0-3677e98a64c5-operator-scripts\") pod \"nova-api-db-create-79pz8\" (UID: \"c13cad78-20d5-4520-81f0-3677e98a64c5\") " pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.907976 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxpm\" (UniqueName: \"kubernetes.io/projected/c13cad78-20d5-4520-81f0-3677e98a64c5-kube-api-access-tgxpm\") pod \"nova-api-db-create-79pz8\" (UID: \"c13cad78-20d5-4520-81f0-3677e98a64c5\") " pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.971036 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-22lwh"] Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.972132 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:29 crc kubenswrapper[4925]: I0202 11:19:29.983431 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-22lwh"] Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:29.998926 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8005-account-create-update-49drq"] Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.000273 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.006046 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.012475 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vrp5\" (UniqueName: \"kubernetes.io/projected/59663ecf-67bb-464d-a56a-0246eee949cc-kube-api-access-5vrp5\") pod \"nova-cell0-db-create-22lwh\" (UID: \"59663ecf-67bb-464d-a56a-0246eee949cc\") " pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.012519 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c13cad78-20d5-4520-81f0-3677e98a64c5-operator-scripts\") pod \"nova-api-db-create-79pz8\" (UID: \"c13cad78-20d5-4520-81f0-3677e98a64c5\") " pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.012549 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgxpm\" (UniqueName: \"kubernetes.io/projected/c13cad78-20d5-4520-81f0-3677e98a64c5-kube-api-access-tgxpm\") pod \"nova-api-db-create-79pz8\" (UID: \"c13cad78-20d5-4520-81f0-3677e98a64c5\") " pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.012577 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59663ecf-67bb-464d-a56a-0246eee949cc-operator-scripts\") pod \"nova-cell0-db-create-22lwh\" (UID: \"59663ecf-67bb-464d-a56a-0246eee949cc\") " pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.013480 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c13cad78-20d5-4520-81f0-3677e98a64c5-operator-scripts\") pod \"nova-api-db-create-79pz8\" (UID: \"c13cad78-20d5-4520-81f0-3677e98a64c5\") " pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.038493 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8005-account-create-update-49drq"] Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.055562 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgxpm\" (UniqueName: \"kubernetes.io/projected/c13cad78-20d5-4520-81f0-3677e98a64c5-kube-api-access-tgxpm\") pod \"nova-api-db-create-79pz8\" (UID: \"c13cad78-20d5-4520-81f0-3677e98a64c5\") " pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.081282 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8wd28"] Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.087884 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.100666 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8wd28"] Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.113598 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mckbb\" (UniqueName: \"kubernetes.io/projected/36eda75d-2be8-431a-9562-95965aa5e22d-kube-api-access-mckbb\") pod \"nova-cell1-db-create-8wd28\" (UID: \"36eda75d-2be8-431a-9562-95965aa5e22d\") " pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.113648 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59663ecf-67bb-464d-a56a-0246eee949cc-operator-scripts\") pod \"nova-cell0-db-create-22lwh\" (UID: \"59663ecf-67bb-464d-a56a-0246eee949cc\") " pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.113673 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36eda75d-2be8-431a-9562-95965aa5e22d-operator-scripts\") pod \"nova-cell1-db-create-8wd28\" (UID: \"36eda75d-2be8-431a-9562-95965aa5e22d\") " pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.113771 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6bkh\" (UniqueName: \"kubernetes.io/projected/724c56e3-b799-47d7-9374-a06c2d5cd6f9-kube-api-access-x6bkh\") pod \"nova-api-8005-account-create-update-49drq\" (UID: \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\") " pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.113812 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5vrp5\" (UniqueName: \"kubernetes.io/projected/59663ecf-67bb-464d-a56a-0246eee949cc-kube-api-access-5vrp5\") pod \"nova-cell0-db-create-22lwh\" (UID: \"59663ecf-67bb-464d-a56a-0246eee949cc\") " pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.113829 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724c56e3-b799-47d7-9374-a06c2d5cd6f9-operator-scripts\") pod \"nova-api-8005-account-create-update-49drq\" (UID: \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\") " pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.114477 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59663ecf-67bb-464d-a56a-0246eee949cc-operator-scripts\") pod \"nova-cell0-db-create-22lwh\" (UID: \"59663ecf-67bb-464d-a56a-0246eee949cc\") " pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.140649 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vrp5\" (UniqueName: \"kubernetes.io/projected/59663ecf-67bb-464d-a56a-0246eee949cc-kube-api-access-5vrp5\") pod \"nova-cell0-db-create-22lwh\" (UID: \"59663ecf-67bb-464d-a56a-0246eee949cc\") " pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.196646 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-269f-account-create-update-dncxz"] Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.198424 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.200662 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.208926 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-269f-account-create-update-dncxz"] Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.216774 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724c56e3-b799-47d7-9374-a06c2d5cd6f9-operator-scripts\") pod \"nova-api-8005-account-create-update-49drq\" (UID: \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\") " pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.216854 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mckbb\" (UniqueName: \"kubernetes.io/projected/36eda75d-2be8-431a-9562-95965aa5e22d-kube-api-access-mckbb\") pod \"nova-cell1-db-create-8wd28\" (UID: \"36eda75d-2be8-431a-9562-95965aa5e22d\") " pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.216883 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36eda75d-2be8-431a-9562-95965aa5e22d-operator-scripts\") pod \"nova-cell1-db-create-8wd28\" (UID: \"36eda75d-2be8-431a-9562-95965aa5e22d\") " pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.218014 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724c56e3-b799-47d7-9374-a06c2d5cd6f9-operator-scripts\") pod \"nova-api-8005-account-create-update-49drq\" (UID: \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\") " 
pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.218731 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36eda75d-2be8-431a-9562-95965aa5e22d-operator-scripts\") pod \"nova-cell1-db-create-8wd28\" (UID: \"36eda75d-2be8-431a-9562-95965aa5e22d\") " pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.219306 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6bkh\" (UniqueName: \"kubernetes.io/projected/724c56e3-b799-47d7-9374-a06c2d5cd6f9-kube-api-access-x6bkh\") pod \"nova-api-8005-account-create-update-49drq\" (UID: \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\") " pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.237065 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mckbb\" (UniqueName: \"kubernetes.io/projected/36eda75d-2be8-431a-9562-95965aa5e22d-kube-api-access-mckbb\") pod \"nova-cell1-db-create-8wd28\" (UID: \"36eda75d-2be8-431a-9562-95965aa5e22d\") " pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.245413 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6bkh\" (UniqueName: \"kubernetes.io/projected/724c56e3-b799-47d7-9374-a06c2d5cd6f9-kube-api-access-x6bkh\") pod \"nova-api-8005-account-create-update-49drq\" (UID: \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\") " pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.254335 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.279199 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-gz8kp" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.319998 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx9wg\" (UniqueName: \"kubernetes.io/projected/15e0ab2c-a590-4b39-af8b-a055a29f01c0-kube-api-access-dx9wg\") pod \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.320123 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-db-sync-config-data\") pod \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.320338 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-combined-ca-bundle\") pod \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\" (UID: \"15e0ab2c-a590-4b39-af8b-a055a29f01c0\") " Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.320647 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwht8\" (UniqueName: \"kubernetes.io/projected/95edd3b0-5a13-4845-bfbf-5e8572214a57-kube-api-access-cwht8\") pod \"nova-cell0-269f-account-create-update-dncxz\" (UID: \"95edd3b0-5a13-4845-bfbf-5e8572214a57\") " pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.320721 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95edd3b0-5a13-4845-bfbf-5e8572214a57-operator-scripts\") pod \"nova-cell0-269f-account-create-update-dncxz\" (UID: \"95edd3b0-5a13-4845-bfbf-5e8572214a57\") " 
pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.325791 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e0ab2c-a590-4b39-af8b-a055a29f01c0-kube-api-access-dx9wg" (OuterVolumeSpecName: "kube-api-access-dx9wg") pod "15e0ab2c-a590-4b39-af8b-a055a29f01c0" (UID: "15e0ab2c-a590-4b39-af8b-a055a29f01c0"). InnerVolumeSpecName "kube-api-access-dx9wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.326019 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "15e0ab2c-a590-4b39-af8b-a055a29f01c0" (UID: "15e0ab2c-a590-4b39-af8b-a055a29f01c0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.362587 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7e77-account-create-update-qkg5j"] Feb 02 11:19:30 crc kubenswrapper[4925]: E0202 11:19:30.365247 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e0ab2c-a590-4b39-af8b-a055a29f01c0" containerName="barbican-db-sync" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.365277 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e0ab2c-a590-4b39-af8b-a055a29f01c0" containerName="barbican-db-sync" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.365610 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e0ab2c-a590-4b39-af8b-a055a29f01c0" containerName="barbican-db-sync" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.366348 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.369896 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.374622 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15e0ab2c-a590-4b39-af8b-a055a29f01c0" (UID: "15e0ab2c-a590-4b39-af8b-a055a29f01c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.386207 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.386610 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7e77-account-create-update-qkg5j"] Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.394584 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.422189 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwht8\" (UniqueName: \"kubernetes.io/projected/95edd3b0-5a13-4845-bfbf-5e8572214a57-kube-api-access-cwht8\") pod \"nova-cell0-269f-account-create-update-dncxz\" (UID: \"95edd3b0-5a13-4845-bfbf-5e8572214a57\") " pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.423952 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.451707 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwht8\" (UniqueName: \"kubernetes.io/projected/95edd3b0-5a13-4845-bfbf-5e8572214a57-kube-api-access-cwht8\") pod \"nova-cell0-269f-account-create-update-dncxz\" (UID: \"95edd3b0-5a13-4845-bfbf-5e8572214a57\") " pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.483438 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95edd3b0-5a13-4845-bfbf-5e8572214a57-operator-scripts\") pod \"nova-cell0-269f-account-create-update-dncxz\" (UID: \"95edd3b0-5a13-4845-bfbf-5e8572214a57\") " pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.483498 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bt82\" (UniqueName: \"kubernetes.io/projected/d99078da-9bce-4614-a4a8-e78da62b3f39-kube-api-access-8bt82\") pod \"nova-cell1-7e77-account-create-update-qkg5j\" (UID: \"d99078da-9bce-4614-a4a8-e78da62b3f39\") " pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.483638 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d99078da-9bce-4614-a4a8-e78da62b3f39-operator-scripts\") pod \"nova-cell1-7e77-account-create-update-qkg5j\" (UID: \"d99078da-9bce-4614-a4a8-e78da62b3f39\") " pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.483979 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.484005 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx9wg\" (UniqueName: \"kubernetes.io/projected/15e0ab2c-a590-4b39-af8b-a055a29f01c0-kube-api-access-dx9wg\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.484020 4925 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15e0ab2c-a590-4b39-af8b-a055a29f01c0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.485357 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95edd3b0-5a13-4845-bfbf-5e8572214a57-operator-scripts\") pod \"nova-cell0-269f-account-create-update-dncxz\" (UID: \"95edd3b0-5a13-4845-bfbf-5e8572214a57\") " pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.566543 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.590573 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d99078da-9bce-4614-a4a8-e78da62b3f39-operator-scripts\") pod \"nova-cell1-7e77-account-create-update-qkg5j\" (UID: \"d99078da-9bce-4614-a4a8-e78da62b3f39\") " pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.590753 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bt82\" (UniqueName: \"kubernetes.io/projected/d99078da-9bce-4614-a4a8-e78da62b3f39-kube-api-access-8bt82\") pod \"nova-cell1-7e77-account-create-update-qkg5j\" (UID: \"d99078da-9bce-4614-a4a8-e78da62b3f39\") " pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.591953 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d99078da-9bce-4614-a4a8-e78da62b3f39-operator-scripts\") pod \"nova-cell1-7e77-account-create-update-qkg5j\" (UID: \"d99078da-9bce-4614-a4a8-e78da62b3f39\") " pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.610326 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bt82\" (UniqueName: \"kubernetes.io/projected/d99078da-9bce-4614-a4a8-e78da62b3f39-kube-api-access-8bt82\") pod \"nova-cell1-7e77-account-create-update-qkg5j\" (UID: \"d99078da-9bce-4614-a4a8-e78da62b3f39\") " pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" Feb 02 11:19:30 crc kubenswrapper[4925]: E0202 11:19:30.666201 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-d8tqm" podUID="b26690f1-3d10-4ef3-a16a-7c33dc1c62c0"
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.706657 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7e77-account-create-update-qkg5j"
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.880495 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-596c688466-nwnv5" event={"ID":"97b44970-d770-46b9-9c10-a8ec03d3bbaf","Type":"ContainerStarted","Data":"9269a86d6602fa33445322e3ed680684627d215de75a8de3a9b000c70dbc0535"}
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.880856 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-596c688466-nwnv5" event={"ID":"97b44970-d770-46b9-9c10-a8ec03d3bbaf","Type":"ContainerStarted","Data":"6189caa9520222ff4595f5ad79db518e4d1e1553137352386d3fc007d1dc944e"}
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.881821 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.881865 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-596c688466-nwnv5"
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.895586 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gz8kp"
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.895724 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gz8kp" event={"ID":"15e0ab2c-a590-4b39-af8b-a055a29f01c0","Type":"ContainerDied","Data":"e34e7de86a170015eaf4374f4a0e2cf889d3075cf5ca38aa18ad89c3c74cce13"}
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.895767 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e34e7de86a170015eaf4374f4a0e2cf889d3075cf5ca38aa18ad89c3c74cce13"
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.917525 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerStarted","Data":"4b564b3f80bfa5dadfa6f4d00e22f59e307c13aafc268b6a4376782ae47d5418"}
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.948012 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-596c688466-nwnv5" podStartSLOduration=2.947988417 podStartE2EDuration="2.947988417s" podCreationTimestamp="2026-02-02 11:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:19:30.908487594 +0000 UTC m=+1347.912736576" watchObservedRunningTime="2026-02-02 11:19:30.947988417 +0000 UTC m=+1347.952237389"
Feb 02 11:19:30 crc kubenswrapper[4925]: I0202 11:19:30.970718 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-79pz8"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.035053 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7f574dbb79-fc5vn"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.038985 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.044656 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xtzng"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.050501 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.050748 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.050882 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tcgcl"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.083231 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f574dbb79-fc5vn"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.101625 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-db-sync-config-data\") pod \"66bbba42-9e45-446e-8042-a428a6269d08\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") "
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.101711 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6648\" (UniqueName: \"kubernetes.io/projected/66bbba42-9e45-446e-8042-a428a6269d08-kube-api-access-n6648\") pod \"66bbba42-9e45-446e-8042-a428a6269d08\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") "
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.101774 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-combined-ca-bundle\") pod \"66bbba42-9e45-446e-8042-a428a6269d08\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") "
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.101872 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-config-data\") pod \"66bbba42-9e45-446e-8042-a428a6269d08\" (UID: \"66bbba42-9e45-446e-8042-a428a6269d08\") "
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.102220 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89t46\" (UniqueName: \"kubernetes.io/projected/604a4d9b-a323-464c-b7f4-e41503e992f4-kube-api-access-89t46\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.102250 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/604a4d9b-a323-464c-b7f4-e41503e992f4-config-data-custom\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.102412 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/604a4d9b-a323-464c-b7f4-e41503e992f4-logs\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.102457 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604a4d9b-a323-464c-b7f4-e41503e992f4-combined-ca-bundle\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.102497 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604a4d9b-a323-464c-b7f4-e41503e992f4-config-data\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.143663 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66bbba42-9e45-446e-8042-a428a6269d08-kube-api-access-n6648" (OuterVolumeSpecName: "kube-api-access-n6648") pod "66bbba42-9e45-446e-8042-a428a6269d08" (UID: "66bbba42-9e45-446e-8042-a428a6269d08"). InnerVolumeSpecName "kube-api-access-n6648". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.155038 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "66bbba42-9e45-446e-8042-a428a6269d08" (UID: "66bbba42-9e45-446e-8042-a428a6269d08"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.195681 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"]
Feb 02 11:19:31 crc kubenswrapper[4925]: E0202 11:19:31.196238 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66bbba42-9e45-446e-8042-a428a6269d08" containerName="glance-db-sync"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.196258 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="66bbba42-9e45-446e-8042-a428a6269d08" containerName="glance-db-sync"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.196507 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="66bbba42-9e45-446e-8042-a428a6269d08" containerName="glance-db-sync"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.203427 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/604a4d9b-a323-464c-b7f4-e41503e992f4-logs\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.203564 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604a4d9b-a323-464c-b7f4-e41503e992f4-combined-ca-bundle\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.203654 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604a4d9b-a323-464c-b7f4-e41503e992f4-config-data\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.203744 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89t46\" (UniqueName: \"kubernetes.io/projected/604a4d9b-a323-464c-b7f4-e41503e992f4-kube-api-access-89t46\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.203809 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/604a4d9b-a323-464c-b7f4-e41503e992f4-config-data-custom\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.203999 4925 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.204134 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6648\" (UniqueName: \"kubernetes.io/projected/66bbba42-9e45-446e-8042-a428a6269d08-kube-api-access-n6648\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.215057 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.215724 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/604a4d9b-a323-464c-b7f4-e41503e992f4-config-data-custom\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.216560 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/604a4d9b-a323-464c-b7f4-e41503e992f4-logs\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.221445 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-sm6fz"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.222758 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604a4d9b-a323-464c-b7f4-e41503e992f4-combined-ca-bundle\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.223284 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.225588 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.250981 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.256022 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89t46\" (UniqueName: \"kubernetes.io/projected/604a4d9b-a323-464c-b7f4-e41503e992f4-kube-api-access-89t46\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.260135 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66bbba42-9e45-446e-8042-a428a6269d08" (UID: "66bbba42-9e45-446e-8042-a428a6269d08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.289431 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604a4d9b-a323-464c-b7f4-e41503e992f4-config-data\") pod \"barbican-worker-7f574dbb79-fc5vn\" (UID: \"604a4d9b-a323-464c-b7f4-e41503e992f4\") " pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.290489 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-sm6fz"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.305752 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461effdf-7e6d-47d3-85f8-eac7940d2100-combined-ca-bundle\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.305840 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-nb\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.305891 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461effdf-7e6d-47d3-85f8-eac7940d2100-logs\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.305921 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-config\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.305963 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-sb\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.306031 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5hs\" (UniqueName: \"kubernetes.io/projected/461effdf-7e6d-47d3-85f8-eac7940d2100-kube-api-access-hj5hs\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.306058 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/461effdf-7e6d-47d3-85f8-eac7940d2100-config-data-custom\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.306102 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x59c\" (UniqueName: \"kubernetes.io/projected/8580fa03-8cae-4b9f-a001-50a1c87191c3-kube-api-access-2x59c\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.306142 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-dns-svc\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.306293 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461effdf-7e6d-47d3-85f8-eac7940d2100-config-data\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.306430 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.322493 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-config-data" (OuterVolumeSpecName: "config-data") pod "66bbba42-9e45-446e-8042-a428a6269d08" (UID: "66bbba42-9e45-446e-8042-a428a6269d08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.329847 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d8bbbb77b-gf2lr"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.336662 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.342365 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.342541 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d8bbbb77b-gf2lr"]
Feb 02 11:19:31 crc kubenswrapper[4925]: W0202 11:19:31.365003 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod724c56e3_b799_47d7_9374_a06c2d5cd6f9.slice/crio-a884336d72ca7dd38fd73a411796714c478f3607f95b3c6a2482d713984c5777 WatchSource:0}: Error finding container a884336d72ca7dd38fd73a411796714c478f3607f95b3c6a2482d713984c5777: Status 404 returned error can't find the container with id a884336d72ca7dd38fd73a411796714c478f3607f95b3c6a2482d713984c5777
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.377630 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8005-account-create-update-49drq"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.407862 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-nb\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.407924 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461effdf-7e6d-47d3-85f8-eac7940d2100-logs\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.407966 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-logs\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.407990 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-config\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408041 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-sb\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408127 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5hs\" (UniqueName: \"kubernetes.io/projected/461effdf-7e6d-47d3-85f8-eac7940d2100-kube-api-access-hj5hs\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408157 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/461effdf-7e6d-47d3-85f8-eac7940d2100-config-data-custom\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408177 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x59c\" (UniqueName: \"kubernetes.io/projected/8580fa03-8cae-4b9f-a001-50a1c87191c3-kube-api-access-2x59c\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408205 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408242 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-dns-svc\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408283 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461effdf-7e6d-47d3-85f8-eac7940d2100-config-data\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408308 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-combined-ca-bundle\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408331 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data-custom\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408372 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461effdf-7e6d-47d3-85f8-eac7940d2100-combined-ca-bundle\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408405 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g49n\" (UniqueName: \"kubernetes.io/projected/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-kube-api-access-5g49n\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.408468 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66bbba42-9e45-446e-8042-a428a6269d08-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.409536 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-nb\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.409676 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-config\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.410032 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/461effdf-7e6d-47d3-85f8-eac7940d2100-logs\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.411284 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7f574dbb79-fc5vn"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.412967 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-dns-svc\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.415149 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-sb\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.416638 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461effdf-7e6d-47d3-85f8-eac7940d2100-combined-ca-bundle\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.416663 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/461effdf-7e6d-47d3-85f8-eac7940d2100-config-data-custom\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.427989 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461effdf-7e6d-47d3-85f8-eac7940d2100-config-data\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.431624 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x59c\" (UniqueName: \"kubernetes.io/projected/8580fa03-8cae-4b9f-a001-50a1c87191c3-kube-api-access-2x59c\") pod \"dnsmasq-dns-7c55bf9497-sm6fz\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.437380 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5hs\" (UniqueName: \"kubernetes.io/projected/461effdf-7e6d-47d3-85f8-eac7940d2100-kube-api-access-hj5hs\") pod \"barbican-keystone-listener-6cb85bfdc6-wzdz4\" (UID: \"461effdf-7e6d-47d3-85f8-eac7940d2100\") " pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.439808 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-22lwh"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.475809 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8wd28"]
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.510944 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.511018 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-combined-ca-bundle\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.511040 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data-custom\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.511108 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g49n\" (UniqueName: \"kubernetes.io/projected/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-kube-api-access-5g49n\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.511160 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-logs\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.511556 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-logs\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.518429 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.528375 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data-custom\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.532644 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-combined-ca-bundle\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.539775 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g49n\" (UniqueName: \"kubernetes.io/projected/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-kube-api-access-5g49n\") pod \"barbican-api-6d8bbbb77b-gf2lr\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.576346 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.582031 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-269f-account-create-update-dncxz"]
Feb 02 11:19:31 crc kubenswrapper[4925]: W0202 11:19:31.603386 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95edd3b0_5a13_4845_bfbf_5e8572214a57.slice/crio-eee45a1449c380d9eec05c336dde3a1961657644bb92938f7a22c26625222d3b WatchSource:0}: Error finding container eee45a1449c380d9eec05c336dde3a1961657644bb92938f7a22c26625222d3b: Status 404 returned error can't find the container with id eee45a1449c380d9eec05c336dde3a1961657644bb92938f7a22c26625222d3b
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.620002 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.672965 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d8bbbb77b-gf2lr"
Feb 02 11:19:31 crc kubenswrapper[4925]: I0202 11:19:31.707663 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7e77-account-create-update-qkg5j"]
Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.077503 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-269f-account-create-update-dncxz" event={"ID":"95edd3b0-5a13-4845-bfbf-5e8572214a57","Type":"ContainerStarted","Data":"eee45a1449c380d9eec05c336dde3a1961657644bb92938f7a22c26625222d3b"}
Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.084418 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f574dbb79-fc5vn"]
Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.115304 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-22lwh" event={"ID":"59663ecf-67bb-464d-a56a-0246eee949cc","Type":"ContainerStarted","Data":"f2c35838dba729bd633d016d3c3b224af10fd2886dd91eb749fec22db8f24749"}
Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.147460 4925 generic.go:334] "Generic (PLEG): container finished" podID="c13cad78-20d5-4520-81f0-3677e98a64c5" containerID="abc4d3d7435f0b4c9500d1243cb62e79abe1e544f573cc24f7e9c687093995dc" exitCode=0
Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.147911 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-79pz8" event={"ID":"c13cad78-20d5-4520-81f0-3677e98a64c5","Type":"ContainerDied","Data":"abc4d3d7435f0b4c9500d1243cb62e79abe1e544f573cc24f7e9c687093995dc"}
Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.148039 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-79pz8" event={"ID":"c13cad78-20d5-4520-81f0-3677e98a64c5","Type":"ContainerStarted","Data":"446eef02de05f5427106c67f3ed06b23acda4699f306c2e436716a73abfb3321"}
Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.195245
4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8005-account-create-update-49drq" event={"ID":"724c56e3-b799-47d7-9374-a06c2d5cd6f9","Type":"ContainerStarted","Data":"f4531b9b986db1f5008d4506a7efe7089561e36ecc4368f1e2878f3190f1988c"} Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.195290 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8005-account-create-update-49drq" event={"ID":"724c56e3-b799-47d7-9374-a06c2d5cd6f9","Type":"ContainerStarted","Data":"a884336d72ca7dd38fd73a411796714c478f3607f95b3c6a2482d713984c5777"} Feb 02 11:19:32 crc kubenswrapper[4925]: W0202 11:19:32.250679 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod604a4d9b_a323_464c_b7f4_e41503e992f4.slice/crio-85dcf9288b872d7dd7137a3b0603d686b39b38326dce3dbf43d9414309ccd828 WatchSource:0}: Error finding container 85dcf9288b872d7dd7137a3b0603d686b39b38326dce3dbf43d9414309ccd828: Status 404 returned error can't find the container with id 85dcf9288b872d7dd7137a3b0603d686b39b38326dce3dbf43d9414309ccd828 Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.255241 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8wd28" event={"ID":"36eda75d-2be8-431a-9562-95965aa5e22d","Type":"ContainerStarted","Data":"bc4c3791b60f5320c2b765e87089c5d7dd8d9f89260748f35378777a65670c30"} Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.299701 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-8005-account-create-update-49drq" podStartSLOduration=3.299681852 podStartE2EDuration="3.299681852s" podCreationTimestamp="2026-02-02 11:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:19:32.294818533 +0000 UTC m=+1349.299067495" watchObservedRunningTime="2026-02-02 
11:19:32.299681852 +0000 UTC m=+1349.303930814" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.302761 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" event={"ID":"d99078da-9bce-4614-a4a8-e78da62b3f39","Type":"ContainerStarted","Data":"15e222897b0a013e19a49029c7f7ba6c2566033fa665f33b1326305203d1d861"} Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.324037 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xtzng" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.324725 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xtzng" event={"ID":"66bbba42-9e45-446e-8042-a428a6269d08","Type":"ContainerDied","Data":"f7080af4979bbe0fdfa4609f05f8e06ee91a3d8f6dad83bc3968b34b2df63002"} Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.324790 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7080af4979bbe0fdfa4609f05f8e06ee91a3d8f6dad83bc3968b34b2df63002" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.452065 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d8bbbb77b-gf2lr"] Feb 02 11:19:32 crc kubenswrapper[4925]: W0202 11:19:32.462278 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d8df520_67f7_49ce_9cc1_4a7ff28e60c3.slice/crio-2bea0d32eee9f3aa279b298e4904740a83dd76da5e8a44aff76f0a5c222589b4 WatchSource:0}: Error finding container 2bea0d32eee9f3aa279b298e4904740a83dd76da5e8a44aff76f0a5c222589b4: Status 404 returned error can't find the container with id 2bea0d32eee9f3aa279b298e4904740a83dd76da5e8a44aff76f0a5c222589b4 Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.524929 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-sm6fz"] Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 
11:19:32.542664 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699df9757c-qhp2h"] Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.549043 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.579488 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-qhp2h"] Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.626044 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4"] Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.666993 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.667070 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-dns-svc\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.667132 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-config\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.667175 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wvg9f\" (UniqueName: \"kubernetes.io/projected/6453ce80-6db5-49dd-a57a-2ba72b63fad6-kube-api-access-wvg9f\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.667200 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.768953 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-dns-svc\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.769794 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-config\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.769956 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvg9f\" (UniqueName: \"kubernetes.io/projected/6453ce80-6db5-49dd-a57a-2ba72b63fad6-kube-api-access-wvg9f\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.770005 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.770104 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.771014 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.771626 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-dns-svc\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.772191 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.772193 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-config\") pod 
\"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.794700 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvg9f\" (UniqueName: \"kubernetes.io/projected/6453ce80-6db5-49dd-a57a-2ba72b63fad6-kube-api-access-wvg9f\") pod \"dnsmasq-dns-699df9757c-qhp2h\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.803043 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-sm6fz"] Feb 02 11:19:32 crc kubenswrapper[4925]: I0202 11:19:32.944933 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.339605 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4" event={"ID":"461effdf-7e6d-47d3-85f8-eac7940d2100","Type":"ContainerStarted","Data":"e15918d248abab0b96442fee7b9da080247b673d3df50f0bfdc5fdaceb7eed6d"} Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.342185 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-22lwh" event={"ID":"59663ecf-67bb-464d-a56a-0246eee949cc","Type":"ContainerStarted","Data":"7535733d48bbea7204459e4bc5a644fce405727796bf303cfe6be37054da990c"} Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.345412 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-269f-account-create-update-dncxz" event={"ID":"95edd3b0-5a13-4845-bfbf-5e8572214a57","Type":"ContainerStarted","Data":"fa986dea4770810b31b37d0dfc9c196b9b4e872e47ecb9f6e05b47e90880642f"} Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.369515 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz" event={"ID":"8580fa03-8cae-4b9f-a001-50a1c87191c3","Type":"ContainerStarted","Data":"2c84f95b7522d3dfeb496a8387f1e0afb50b9539da7f39cfbeb0c035f71e6c3f"} Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.374053 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f574dbb79-fc5vn" event={"ID":"604a4d9b-a323-464c-b7f4-e41503e992f4","Type":"ContainerStarted","Data":"85dcf9288b872d7dd7137a3b0603d686b39b38326dce3dbf43d9414309ccd828"} Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.389117 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-22lwh" podStartSLOduration=4.389097502 podStartE2EDuration="4.389097502s" podCreationTimestamp="2026-02-02 11:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:19:33.363678734 +0000 UTC m=+1350.367927696" watchObservedRunningTime="2026-02-02 11:19:33.389097502 +0000 UTC m=+1350.393346474" Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.392329 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8wd28" event={"ID":"36eda75d-2be8-431a-9562-95965aa5e22d","Type":"ContainerStarted","Data":"5e5b59e06dbdb7770342482bb43726801237d195b13a533dccd4490a20883f2b"} Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.400698 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-269f-account-create-update-dncxz" podStartSLOduration=3.40067459 podStartE2EDuration="3.40067459s" podCreationTimestamp="2026-02-02 11:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:19:33.383969415 +0000 UTC m=+1350.388218377" watchObservedRunningTime="2026-02-02 11:19:33.40067459 +0000 UTC m=+1350.404923552" Feb 02 11:19:33 crc 
kubenswrapper[4925]: I0202 11:19:33.409411 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" event={"ID":"d99078da-9bce-4614-a4a8-e78da62b3f39","Type":"ContainerStarted","Data":"f91c31055d3438fff777db49c5b7837a20608d3c72aa29b206b2569b706065b1"} Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.418675 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" event={"ID":"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3","Type":"ContainerStarted","Data":"877a4ed9aafd56145b6170d1e71cd4f29d9ca482097a5e24c5bb5b016eb9b863"} Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.418723 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" event={"ID":"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3","Type":"ContainerStarted","Data":"2bea0d32eee9f3aa279b298e4904740a83dd76da5e8a44aff76f0a5c222589b4"} Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.426112 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-qhp2h"] Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.435232 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-8wd28" podStartSLOduration=3.435213402 podStartE2EDuration="3.435213402s" podCreationTimestamp="2026-02-02 11:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:19:33.411934371 +0000 UTC m=+1350.416183323" watchObservedRunningTime="2026-02-02 11:19:33.435213402 +0000 UTC m=+1350.439462364" Feb 02 11:19:33 crc kubenswrapper[4925]: I0202 11:19:33.440202 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" podStartSLOduration=3.440186614 podStartE2EDuration="3.440186614s" podCreationTimestamp="2026-02-02 11:19:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:19:33.431719538 +0000 UTC m=+1350.435968520" watchObservedRunningTime="2026-02-02 11:19:33.440186614 +0000 UTC m=+1350.444435576" Feb 02 11:19:34 crc kubenswrapper[4925]: W0202 11:19:34.365412 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6453ce80_6db5_49dd_a57a_2ba72b63fad6.slice/crio-c3bfe8a095070dc534fab851bbfc29ba132c8953ac83886bd54af27189d25e70 WatchSource:0}: Error finding container c3bfe8a095070dc534fab851bbfc29ba132c8953ac83886bd54af27189d25e70: Status 404 returned error can't find the container with id c3bfe8a095070dc534fab851bbfc29ba132c8953ac83886bd54af27189d25e70 Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.440622 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" event={"ID":"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3","Type":"ContainerStarted","Data":"19cd1951effddfbcf7a16acfc3830e1ee38f63b28a4f432f154de8244a3eea81"} Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.441789 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.441822 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.445892 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" event={"ID":"6453ce80-6db5-49dd-a57a-2ba72b63fad6","Type":"ContainerStarted","Data":"c3bfe8a095070dc534fab851bbfc29ba132c8953ac83886bd54af27189d25e70"} Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.448227 4925 generic.go:334] "Generic (PLEG): container finished" podID="59663ecf-67bb-464d-a56a-0246eee949cc" 
containerID="7535733d48bbea7204459e4bc5a644fce405727796bf303cfe6be37054da990c" exitCode=0 Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.448275 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-22lwh" event={"ID":"59663ecf-67bb-464d-a56a-0246eee949cc","Type":"ContainerDied","Data":"7535733d48bbea7204459e4bc5a644fce405727796bf303cfe6be37054da990c"} Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.452201 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-79pz8" event={"ID":"c13cad78-20d5-4520-81f0-3677e98a64c5","Type":"ContainerDied","Data":"446eef02de05f5427106c67f3ed06b23acda4699f306c2e436716a73abfb3321"} Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.452251 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446eef02de05f5427106c67f3ed06b23acda4699f306c2e436716a73abfb3321" Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.454162 4925 generic.go:334] "Generic (PLEG): container finished" podID="8580fa03-8cae-4b9f-a001-50a1c87191c3" containerID="1a70267e4ab649eee1733501d3a2481d5b3126dbb53b227f86c188f30eae0537" exitCode=0 Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.454227 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz" event={"ID":"8580fa03-8cae-4b9f-a001-50a1c87191c3","Type":"ContainerDied","Data":"1a70267e4ab649eee1733501d3a2481d5b3126dbb53b227f86c188f30eae0537"} Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.456637 4925 generic.go:334] "Generic (PLEG): container finished" podID="36eda75d-2be8-431a-9562-95965aa5e22d" containerID="5e5b59e06dbdb7770342482bb43726801237d195b13a533dccd4490a20883f2b" exitCode=0 Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.457518 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8wd28" 
event={"ID":"36eda75d-2be8-431a-9562-95965aa5e22d","Type":"ContainerDied","Data":"5e5b59e06dbdb7770342482bb43726801237d195b13a533dccd4490a20883f2b"} Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.479050 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" podStartSLOduration=3.479027753 podStartE2EDuration="3.479027753s" podCreationTimestamp="2026-02-02 11:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:19:34.464603509 +0000 UTC m=+1351.468852481" watchObservedRunningTime="2026-02-02 11:19:34.479027753 +0000 UTC m=+1351.483276725" Feb 02 11:19:34 crc kubenswrapper[4925]: E0202 11:19:34.635571 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd99078da_9bce_4614_a4a8_e78da62b3f39.slice/crio-f91c31055d3438fff777db49c5b7837a20608d3c72aa29b206b2569b706065b1.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.820896 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.929164 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgxpm\" (UniqueName: \"kubernetes.io/projected/c13cad78-20d5-4520-81f0-3677e98a64c5-kube-api-access-tgxpm\") pod \"c13cad78-20d5-4520-81f0-3677e98a64c5\" (UID: \"c13cad78-20d5-4520-81f0-3677e98a64c5\") " Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.929281 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c13cad78-20d5-4520-81f0-3677e98a64c5-operator-scripts\") pod \"c13cad78-20d5-4520-81f0-3677e98a64c5\" (UID: \"c13cad78-20d5-4520-81f0-3677e98a64c5\") " Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.930728 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13cad78-20d5-4520-81f0-3677e98a64c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c13cad78-20d5-4520-81f0-3677e98a64c5" (UID: "c13cad78-20d5-4520-81f0-3677e98a64c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:34 crc kubenswrapper[4925]: I0202 11:19:34.937893 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13cad78-20d5-4520-81f0-3677e98a64c5-kube-api-access-tgxpm" (OuterVolumeSpecName: "kube-api-access-tgxpm") pod "c13cad78-20d5-4520-81f0-3677e98a64c5" (UID: "c13cad78-20d5-4520-81f0-3677e98a64c5"). InnerVolumeSpecName "kube-api-access-tgxpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.010597 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.032963 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgxpm\" (UniqueName: \"kubernetes.io/projected/c13cad78-20d5-4520-81f0-3677e98a64c5-kube-api-access-tgxpm\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.033001 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c13cad78-20d5-4520-81f0-3677e98a64c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.134020 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-sb\") pod \"8580fa03-8cae-4b9f-a001-50a1c87191c3\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.134171 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-dns-svc\") pod \"8580fa03-8cae-4b9f-a001-50a1c87191c3\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.134243 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-config\") pod \"8580fa03-8cae-4b9f-a001-50a1c87191c3\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.134360 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x59c\" (UniqueName: \"kubernetes.io/projected/8580fa03-8cae-4b9f-a001-50a1c87191c3-kube-api-access-2x59c\") pod \"8580fa03-8cae-4b9f-a001-50a1c87191c3\" (UID: 
\"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.134736 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-nb\") pod \"8580fa03-8cae-4b9f-a001-50a1c87191c3\" (UID: \"8580fa03-8cae-4b9f-a001-50a1c87191c3\") " Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.139570 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8580fa03-8cae-4b9f-a001-50a1c87191c3-kube-api-access-2x59c" (OuterVolumeSpecName: "kube-api-access-2x59c") pod "8580fa03-8cae-4b9f-a001-50a1c87191c3" (UID: "8580fa03-8cae-4b9f-a001-50a1c87191c3"). InnerVolumeSpecName "kube-api-access-2x59c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.160833 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-config" (OuterVolumeSpecName: "config") pod "8580fa03-8cae-4b9f-a001-50a1c87191c3" (UID: "8580fa03-8cae-4b9f-a001-50a1c87191c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.160844 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8580fa03-8cae-4b9f-a001-50a1c87191c3" (UID: "8580fa03-8cae-4b9f-a001-50a1c87191c3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.162130 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8580fa03-8cae-4b9f-a001-50a1c87191c3" (UID: "8580fa03-8cae-4b9f-a001-50a1c87191c3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.175164 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8580fa03-8cae-4b9f-a001-50a1c87191c3" (UID: "8580fa03-8cae-4b9f-a001-50a1c87191c3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.237466 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.237521 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.237534 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x59c\" (UniqueName: \"kubernetes.io/projected/8580fa03-8cae-4b9f-a001-50a1c87191c3-kube-api-access-2x59c\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.237547 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 
02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.237561 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8580fa03-8cae-4b9f-a001-50a1c87191c3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.471807 4925 generic.go:334] "Generic (PLEG): container finished" podID="95edd3b0-5a13-4845-bfbf-5e8572214a57" containerID="fa986dea4770810b31b37d0dfc9c196b9b4e872e47ecb9f6e05b47e90880642f" exitCode=0 Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.472216 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-269f-account-create-update-dncxz" event={"ID":"95edd3b0-5a13-4845-bfbf-5e8572214a57","Type":"ContainerDied","Data":"fa986dea4770810b31b37d0dfc9c196b9b4e872e47ecb9f6e05b47e90880642f"} Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.478673 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz" event={"ID":"8580fa03-8cae-4b9f-a001-50a1c87191c3","Type":"ContainerDied","Data":"2c84f95b7522d3dfeb496a8387f1e0afb50b9539da7f39cfbeb0c035f71e6c3f"} Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.478742 4925 scope.go:117] "RemoveContainer" containerID="1a70267e4ab649eee1733501d3a2481d5b3126dbb53b227f86c188f30eae0537" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.478876 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c55bf9497-sm6fz" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.483051 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerStarted","Data":"7416df062a65b5d5665bef27dec3bb353f8e4e0c9b2cb34b625f7c6c0a7d19ee"} Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.483355 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="ceilometer-central-agent" containerID="cri-o://fc7e59e048c4be8c5179fedc778f53800b55d0a975b946e1d548e65ff14a385b" gracePeriod=30 Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.483733 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.483838 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="proxy-httpd" containerID="cri-o://7416df062a65b5d5665bef27dec3bb353f8e4e0c9b2cb34b625f7c6c0a7d19ee" gracePeriod=30 Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.483941 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="sg-core" containerID="cri-o://4b564b3f80bfa5dadfa6f4d00e22f59e307c13aafc268b6a4376782ae47d5418" gracePeriod=30 Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.484034 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="ceilometer-notification-agent" containerID="cri-o://23676cbb3ee982d4f8e83476f0cf8428b03205f1ac3e7cb99c173ef527c5c0d4" gracePeriod=30 Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.500252 4925 
generic.go:334] "Generic (PLEG): container finished" podID="724c56e3-b799-47d7-9374-a06c2d5cd6f9" containerID="f4531b9b986db1f5008d4506a7efe7089561e36ecc4368f1e2878f3190f1988c" exitCode=0 Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.500370 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8005-account-create-update-49drq" event={"ID":"724c56e3-b799-47d7-9374-a06c2d5cd6f9","Type":"ContainerDied","Data":"f4531b9b986db1f5008d4506a7efe7089561e36ecc4368f1e2878f3190f1988c"} Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.514468 4925 generic.go:334] "Generic (PLEG): container finished" podID="d99078da-9bce-4614-a4a8-e78da62b3f39" containerID="f91c31055d3438fff777db49c5b7837a20608d3c72aa29b206b2569b706065b1" exitCode=0 Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.514571 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" event={"ID":"d99078da-9bce-4614-a4a8-e78da62b3f39","Type":"ContainerDied","Data":"f91c31055d3438fff777db49c5b7837a20608d3c72aa29b206b2569b706065b1"} Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.553029 4925 generic.go:334] "Generic (PLEG): container finished" podID="6453ce80-6db5-49dd-a57a-2ba72b63fad6" containerID="cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012" exitCode=0 Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.553203 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-79pz8" Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.555139 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" event={"ID":"6453ce80-6db5-49dd-a57a-2ba72b63fad6","Type":"ContainerDied","Data":"cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012"} Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.562061 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-sm6fz"] Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.578347 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-sm6fz"] Feb 02 11:19:35 crc kubenswrapper[4925]: I0202 11:19:35.590401 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.870624356 podStartE2EDuration="9.590376637s" podCreationTimestamp="2026-02-02 11:19:26 +0000 UTC" firstStartedPulling="2026-02-02 11:19:27.658526235 +0000 UTC m=+1344.662775197" lastFinishedPulling="2026-02-02 11:19:34.378278516 +0000 UTC m=+1351.382527478" observedRunningTime="2026-02-02 11:19:35.570931769 +0000 UTC m=+1352.575180731" watchObservedRunningTime="2026-02-02 11:19:35.590376637 +0000 UTC m=+1352.594625599" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.401821 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.403008 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8697ffdb94-2bnsl"] Feb 02 11:19:36 crc kubenswrapper[4925]: E0202 11:19:36.403358 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59663ecf-67bb-464d-a56a-0246eee949cc" containerName="mariadb-database-create" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.403374 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="59663ecf-67bb-464d-a56a-0246eee949cc" containerName="mariadb-database-create" Feb 02 11:19:36 crc kubenswrapper[4925]: E0202 11:19:36.403387 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8580fa03-8cae-4b9f-a001-50a1c87191c3" containerName="init" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.403394 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="8580fa03-8cae-4b9f-a001-50a1c87191c3" containerName="init" Feb 02 11:19:36 crc kubenswrapper[4925]: E0202 11:19:36.403418 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13cad78-20d5-4520-81f0-3677e98a64c5" containerName="mariadb-database-create" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.403423 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13cad78-20d5-4520-81f0-3677e98a64c5" containerName="mariadb-database-create" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.403568 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="c13cad78-20d5-4520-81f0-3677e98a64c5" containerName="mariadb-database-create" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.403581 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="8580fa03-8cae-4b9f-a001-50a1c87191c3" containerName="init" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.403596 4925 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="59663ecf-67bb-464d-a56a-0246eee949cc" containerName="mariadb-database-create" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.404388 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.408684 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.412190 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.430799 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8697ffdb94-2bnsl"] Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.460881 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.564105 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-22lwh" event={"ID":"59663ecf-67bb-464d-a56a-0246eee949cc","Type":"ContainerDied","Data":"f2c35838dba729bd633d016d3c3b224af10fd2886dd91eb749fec22db8f24749"} Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.564147 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2c35838dba729bd633d016d3c3b224af10fd2886dd91eb749fec22db8f24749" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.564217 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-22lwh" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.571583 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59663ecf-67bb-464d-a56a-0246eee949cc-operator-scripts\") pod \"59663ecf-67bb-464d-a56a-0246eee949cc\" (UID: \"59663ecf-67bb-464d-a56a-0246eee949cc\") " Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.571654 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36eda75d-2be8-431a-9562-95965aa5e22d-operator-scripts\") pod \"36eda75d-2be8-431a-9562-95965aa5e22d\" (UID: \"36eda75d-2be8-431a-9562-95965aa5e22d\") " Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.571770 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mckbb\" (UniqueName: \"kubernetes.io/projected/36eda75d-2be8-431a-9562-95965aa5e22d-kube-api-access-mckbb\") pod \"36eda75d-2be8-431a-9562-95965aa5e22d\" (UID: \"36eda75d-2be8-431a-9562-95965aa5e22d\") " Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.571825 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vrp5\" (UniqueName: \"kubernetes.io/projected/59663ecf-67bb-464d-a56a-0246eee949cc-kube-api-access-5vrp5\") pod \"59663ecf-67bb-464d-a56a-0246eee949cc\" (UID: \"59663ecf-67bb-464d-a56a-0246eee949cc\") " Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.572094 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38619cef-521e-4e12-9919-8846bed56c10-logs\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.572126 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-config-data\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.572147 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-internal-tls-certs\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.572172 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-655cj\" (UniqueName: \"kubernetes.io/projected/38619cef-521e-4e12-9919-8846bed56c10-kube-api-access-655cj\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.572209 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-combined-ca-bundle\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.572395 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-config-data-custom\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " 
pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.572472 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-public-tls-certs\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.573325 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59663ecf-67bb-464d-a56a-0246eee949cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59663ecf-67bb-464d-a56a-0246eee949cc" (UID: "59663ecf-67bb-464d-a56a-0246eee949cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.573697 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36eda75d-2be8-431a-9562-95965aa5e22d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36eda75d-2be8-431a-9562-95965aa5e22d" (UID: "36eda75d-2be8-431a-9562-95965aa5e22d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.582735 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59663ecf-67bb-464d-a56a-0246eee949cc-kube-api-access-5vrp5" (OuterVolumeSpecName: "kube-api-access-5vrp5") pod "59663ecf-67bb-464d-a56a-0246eee949cc" (UID: "59663ecf-67bb-464d-a56a-0246eee949cc"). InnerVolumeSpecName "kube-api-access-5vrp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.584788 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36eda75d-2be8-431a-9562-95965aa5e22d-kube-api-access-mckbb" (OuterVolumeSpecName: "kube-api-access-mckbb") pod "36eda75d-2be8-431a-9562-95965aa5e22d" (UID: "36eda75d-2be8-431a-9562-95965aa5e22d"). InnerVolumeSpecName "kube-api-access-mckbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.606217 4925 generic.go:334] "Generic (PLEG): container finished" podID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerID="7416df062a65b5d5665bef27dec3bb353f8e4e0c9b2cb34b625f7c6c0a7d19ee" exitCode=0 Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.606431 4925 generic.go:334] "Generic (PLEG): container finished" podID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerID="4b564b3f80bfa5dadfa6f4d00e22f59e307c13aafc268b6a4376782ae47d5418" exitCode=2 Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.606542 4925 generic.go:334] "Generic (PLEG): container finished" podID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerID="23676cbb3ee982d4f8e83476f0cf8428b03205f1ac3e7cb99c173ef527c5c0d4" exitCode=0 Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.606630 4925 generic.go:334] "Generic (PLEG): container finished" podID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerID="fc7e59e048c4be8c5179fedc778f53800b55d0a975b946e1d548e65ff14a385b" exitCode=0 Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.606441 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerDied","Data":"7416df062a65b5d5665bef27dec3bb353f8e4e0c9b2cb34b625f7c6c0a7d19ee"} Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.606816 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerDied","Data":"4b564b3f80bfa5dadfa6f4d00e22f59e307c13aafc268b6a4376782ae47d5418"} Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.606890 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerDied","Data":"23676cbb3ee982d4f8e83476f0cf8428b03205f1ac3e7cb99c173ef527c5c0d4"} Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.606968 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerDied","Data":"fc7e59e048c4be8c5179fedc778f53800b55d0a975b946e1d548e65ff14a385b"} Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.622368 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8wd28" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.623539 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8wd28" event={"ID":"36eda75d-2be8-431a-9562-95965aa5e22d","Type":"ContainerDied","Data":"bc4c3791b60f5320c2b765e87089c5d7dd8d9f89260748f35378777a65670c30"} Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.623578 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4c3791b60f5320c2b765e87089c5d7dd8d9f89260748f35378777a65670c30" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.674144 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-config-data-custom\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.686356 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-public-tls-certs\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.686597 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38619cef-521e-4e12-9919-8846bed56c10-logs\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.686945 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38619cef-521e-4e12-9919-8846bed56c10-logs\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.686996 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-config-data\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.687027 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-internal-tls-certs\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.687056 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-655cj\" (UniqueName: 
\"kubernetes.io/projected/38619cef-521e-4e12-9919-8846bed56c10-kube-api-access-655cj\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.687182 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-combined-ca-bundle\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.687436 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36eda75d-2be8-431a-9562-95965aa5e22d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.687454 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mckbb\" (UniqueName: \"kubernetes.io/projected/36eda75d-2be8-431a-9562-95965aa5e22d-kube-api-access-mckbb\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.687469 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vrp5\" (UniqueName: \"kubernetes.io/projected/59663ecf-67bb-464d-a56a-0246eee949cc-kube-api-access-5vrp5\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.687481 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59663ecf-67bb-464d-a56a-0246eee949cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.691036 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-public-tls-certs\") pod 
\"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.691215 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-config-data-custom\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.693837 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-combined-ca-bundle\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.713152 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-config-data\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.714100 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38619cef-521e-4e12-9919-8846bed56c10-internal-tls-certs\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: \"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.717748 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-655cj\" (UniqueName: \"kubernetes.io/projected/38619cef-521e-4e12-9919-8846bed56c10-kube-api-access-655cj\") pod \"barbican-api-8697ffdb94-2bnsl\" (UID: 
\"38619cef-521e-4e12-9919-8846bed56c10\") " pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.726801 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8580fa03-8cae-4b9f-a001-50a1c87191c3" path="/var/lib/kubelet/pods/8580fa03-8cae-4b9f-a001-50a1c87191c3/volumes" Feb 02 11:19:36 crc kubenswrapper[4925]: I0202 11:19:36.729758 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.200508 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.208784 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.323293 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-log-httpd\") pod \"eb1481b0-f9a0-4094-84da-002dfab54a82\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.323711 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d99078da-9bce-4614-a4a8-e78da62b3f39-operator-scripts\") pod \"d99078da-9bce-4614-a4a8-e78da62b3f39\" (UID: \"d99078da-9bce-4614-a4a8-e78da62b3f39\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.323752 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-combined-ca-bundle\") pod \"eb1481b0-f9a0-4094-84da-002dfab54a82\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " Feb 02 11:19:37 crc 
kubenswrapper[4925]: I0202 11:19:37.323778 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-run-httpd\") pod \"eb1481b0-f9a0-4094-84da-002dfab54a82\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.323807 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-sg-core-conf-yaml\") pod \"eb1481b0-f9a0-4094-84da-002dfab54a82\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.323897 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bt82\" (UniqueName: \"kubernetes.io/projected/d99078da-9bce-4614-a4a8-e78da62b3f39-kube-api-access-8bt82\") pod \"d99078da-9bce-4614-a4a8-e78da62b3f39\" (UID: \"d99078da-9bce-4614-a4a8-e78da62b3f39\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.323947 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-scripts\") pod \"eb1481b0-f9a0-4094-84da-002dfab54a82\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.323986 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-config-data\") pod \"eb1481b0-f9a0-4094-84da-002dfab54a82\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.324022 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7gww\" (UniqueName: 
\"kubernetes.io/projected/eb1481b0-f9a0-4094-84da-002dfab54a82-kube-api-access-t7gww\") pod \"eb1481b0-f9a0-4094-84da-002dfab54a82\" (UID: \"eb1481b0-f9a0-4094-84da-002dfab54a82\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.325927 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb1481b0-f9a0-4094-84da-002dfab54a82" (UID: "eb1481b0-f9a0-4094-84da-002dfab54a82"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.327178 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d99078da-9bce-4614-a4a8-e78da62b3f39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d99078da-9bce-4614-a4a8-e78da62b3f39" (UID: "d99078da-9bce-4614-a4a8-e78da62b3f39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.335693 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb1481b0-f9a0-4094-84da-002dfab54a82" (UID: "eb1481b0-f9a0-4094-84da-002dfab54a82"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.344584 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1481b0-f9a0-4094-84da-002dfab54a82-kube-api-access-t7gww" (OuterVolumeSpecName: "kube-api-access-t7gww") pod "eb1481b0-f9a0-4094-84da-002dfab54a82" (UID: "eb1481b0-f9a0-4094-84da-002dfab54a82"). InnerVolumeSpecName "kube-api-access-t7gww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.347926 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99078da-9bce-4614-a4a8-e78da62b3f39-kube-api-access-8bt82" (OuterVolumeSpecName: "kube-api-access-8bt82") pod "d99078da-9bce-4614-a4a8-e78da62b3f39" (UID: "d99078da-9bce-4614-a4a8-e78da62b3f39"). InnerVolumeSpecName "kube-api-access-8bt82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.349690 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-scripts" (OuterVolumeSpecName: "scripts") pod "eb1481b0-f9a0-4094-84da-002dfab54a82" (UID: "eb1481b0-f9a0-4094-84da-002dfab54a82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.366267 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.426498 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bt82\" (UniqueName: \"kubernetes.io/projected/d99078da-9bce-4614-a4a8-e78da62b3f39-kube-api-access-8bt82\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.426525 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.426537 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7gww\" (UniqueName: \"kubernetes.io/projected/eb1481b0-f9a0-4094-84da-002dfab54a82-kube-api-access-t7gww\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.426551 4925 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.426577 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d99078da-9bce-4614-a4a8-e78da62b3f39-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.426589 4925 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb1481b0-f9a0-4094-84da-002dfab54a82-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.435223 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb1481b0-f9a0-4094-84da-002dfab54a82" (UID: 
"eb1481b0-f9a0-4094-84da-002dfab54a82"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.453862 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.527241 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724c56e3-b799-47d7-9374-a06c2d5cd6f9-operator-scripts\") pod \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\" (UID: \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.527325 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6bkh\" (UniqueName: \"kubernetes.io/projected/724c56e3-b799-47d7-9374-a06c2d5cd6f9-kube-api-access-x6bkh\") pod \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\" (UID: \"724c56e3-b799-47d7-9374-a06c2d5cd6f9\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.527441 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwht8\" (UniqueName: \"kubernetes.io/projected/95edd3b0-5a13-4845-bfbf-5e8572214a57-kube-api-access-cwht8\") pod \"95edd3b0-5a13-4845-bfbf-5e8572214a57\" (UID: \"95edd3b0-5a13-4845-bfbf-5e8572214a57\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.527508 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95edd3b0-5a13-4845-bfbf-5e8572214a57-operator-scripts\") pod \"95edd3b0-5a13-4845-bfbf-5e8572214a57\" (UID: \"95edd3b0-5a13-4845-bfbf-5e8572214a57\") " Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.528026 4925 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.528989 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95edd3b0-5a13-4845-bfbf-5e8572214a57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95edd3b0-5a13-4845-bfbf-5e8572214a57" (UID: "95edd3b0-5a13-4845-bfbf-5e8572214a57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.530286 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/724c56e3-b799-47d7-9374-a06c2d5cd6f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "724c56e3-b799-47d7-9374-a06c2d5cd6f9" (UID: "724c56e3-b799-47d7-9374-a06c2d5cd6f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.531288 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb1481b0-f9a0-4094-84da-002dfab54a82" (UID: "eb1481b0-f9a0-4094-84da-002dfab54a82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.534342 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724c56e3-b799-47d7-9374-a06c2d5cd6f9-kube-api-access-x6bkh" (OuterVolumeSpecName: "kube-api-access-x6bkh") pod "724c56e3-b799-47d7-9374-a06c2d5cd6f9" (UID: "724c56e3-b799-47d7-9374-a06c2d5cd6f9"). InnerVolumeSpecName "kube-api-access-x6bkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.535411 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95edd3b0-5a13-4845-bfbf-5e8572214a57-kube-api-access-cwht8" (OuterVolumeSpecName: "kube-api-access-cwht8") pod "95edd3b0-5a13-4845-bfbf-5e8572214a57" (UID: "95edd3b0-5a13-4845-bfbf-5e8572214a57"). InnerVolumeSpecName "kube-api-access-cwht8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.560044 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-config-data" (OuterVolumeSpecName: "config-data") pod "eb1481b0-f9a0-4094-84da-002dfab54a82" (UID: "eb1481b0-f9a0-4094-84da-002dfab54a82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.630125 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/724c56e3-b799-47d7-9374-a06c2d5cd6f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.630160 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6bkh\" (UniqueName: \"kubernetes.io/projected/724c56e3-b799-47d7-9374-a06c2d5cd6f9-kube-api-access-x6bkh\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.630176 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.630187 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwht8\" (UniqueName: 
\"kubernetes.io/projected/95edd3b0-5a13-4845-bfbf-5e8572214a57-kube-api-access-cwht8\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.630199 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95edd3b0-5a13-4845-bfbf-5e8572214a57-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.630210 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1481b0-f9a0-4094-84da-002dfab54a82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.634361 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.634703 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb1481b0-f9a0-4094-84da-002dfab54a82","Type":"ContainerDied","Data":"7e630a5d8db9d582e9cd8f448272dfab8bf3ac7caabfac6257c12f1334cf7682"} Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.634988 4925 scope.go:117] "RemoveContainer" containerID="7416df062a65b5d5665bef27dec3bb353f8e4e0c9b2cb34b625f7c6c0a7d19ee" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.636352 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f574dbb79-fc5vn" event={"ID":"604a4d9b-a323-464c-b7f4-e41503e992f4","Type":"ContainerStarted","Data":"2e07f376df04c0b3612e1a09605508135442e4269ccb5103adf4ede938dd5735"} Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.636402 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f574dbb79-fc5vn" event={"ID":"604a4d9b-a323-464c-b7f4-e41503e992f4","Type":"ContainerStarted","Data":"558621a9cab28798c6acdd66602fa534dd6f011eda4b68cc8fed2a0cc61fc12b"} Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 
11:19:37.639333 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8005-account-create-update-49drq" event={"ID":"724c56e3-b799-47d7-9374-a06c2d5cd6f9","Type":"ContainerDied","Data":"a884336d72ca7dd38fd73a411796714c478f3607f95b3c6a2482d713984c5777"} Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.639373 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a884336d72ca7dd38fd73a411796714c478f3607f95b3c6a2482d713984c5777" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.639450 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8005-account-create-update-49drq" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.641401 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" event={"ID":"d99078da-9bce-4614-a4a8-e78da62b3f39","Type":"ContainerDied","Data":"15e222897b0a013e19a49029c7f7ba6c2566033fa665f33b1326305203d1d861"} Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.641439 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e222897b0a013e19a49029c7f7ba6c2566033fa665f33b1326305203d1d861" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.641505 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7e77-account-create-update-qkg5j" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.650878 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8697ffdb94-2bnsl"] Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.665676 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4" event={"ID":"461effdf-7e6d-47d3-85f8-eac7940d2100","Type":"ContainerStarted","Data":"9356f02529ba06667685451e7b043b91756a6c24cec4615d4af86aa3c1bad0df"} Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.665731 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4" event={"ID":"461effdf-7e6d-47d3-85f8-eac7940d2100","Type":"ContainerStarted","Data":"28217a3e01e5cd69fd65bf4019cd8985f7cc3bb934c51ceab62b4c7a8a840f53"} Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.668148 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" event={"ID":"6453ce80-6db5-49dd-a57a-2ba72b63fad6","Type":"ContainerStarted","Data":"08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be"} Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.670247 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.674720 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-269f-account-create-update-dncxz" event={"ID":"95edd3b0-5a13-4845-bfbf-5e8572214a57","Type":"ContainerDied","Data":"eee45a1449c380d9eec05c336dde3a1961657644bb92938f7a22c26625222d3b"} Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.674773 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee45a1449c380d9eec05c336dde3a1961657644bb92938f7a22c26625222d3b" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 
11:19:37.674850 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-269f-account-create-update-dncxz" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.693423 4925 scope.go:117] "RemoveContainer" containerID="4b564b3f80bfa5dadfa6f4d00e22f59e307c13aafc268b6a4376782ae47d5418" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.702471 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7f574dbb79-fc5vn" podStartSLOduration=2.429850008 podStartE2EDuration="6.702450994s" podCreationTimestamp="2026-02-02 11:19:31 +0000 UTC" firstStartedPulling="2026-02-02 11:19:32.254799705 +0000 UTC m=+1349.259048667" lastFinishedPulling="2026-02-02 11:19:36.527400691 +0000 UTC m=+1353.531649653" observedRunningTime="2026-02-02 11:19:37.677459347 +0000 UTC m=+1354.681708309" watchObservedRunningTime="2026-02-02 11:19:37.702450994 +0000 UTC m=+1354.706699956" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.730793 4925 scope.go:117] "RemoveContainer" containerID="23676cbb3ee982d4f8e83476f0cf8428b03205f1ac3e7cb99c173ef527c5c0d4" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.741618 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" podStartSLOduration=5.741595608 podStartE2EDuration="5.741595608s" podCreationTimestamp="2026-02-02 11:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:19:37.69593674 +0000 UTC m=+1354.700185702" watchObservedRunningTime="2026-02-02 11:19:37.741595608 +0000 UTC m=+1354.745844570" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.791786 4925 scope.go:117] "RemoveContainer" containerID="fc7e59e048c4be8c5179fedc778f53800b55d0a975b946e1d548e65ff14a385b" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.901809 4925 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-keystone-listener-6cb85bfdc6-wzdz4" podStartSLOduration=3.008417761 podStartE2EDuration="6.901787131s" podCreationTimestamp="2026-02-02 11:19:31 +0000 UTC" firstStartedPulling="2026-02-02 11:19:32.636382534 +0000 UTC m=+1349.640631496" lastFinishedPulling="2026-02-02 11:19:36.529751904 +0000 UTC m=+1353.534000866" observedRunningTime="2026-02-02 11:19:37.725700634 +0000 UTC m=+1354.729949596" watchObservedRunningTime="2026-02-02 11:19:37.901787131 +0000 UTC m=+1354.906036103" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.916039 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.941480 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.958680 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:19:37 crc kubenswrapper[4925]: E0202 11:19:37.959096 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99078da-9bce-4614-a4a8-e78da62b3f39" containerName="mariadb-account-create-update" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959109 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99078da-9bce-4614-a4a8-e78da62b3f39" containerName="mariadb-account-create-update" Feb 02 11:19:37 crc kubenswrapper[4925]: E0202 11:19:37.959117 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="724c56e3-b799-47d7-9374-a06c2d5cd6f9" containerName="mariadb-account-create-update" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959123 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="724c56e3-b799-47d7-9374-a06c2d5cd6f9" containerName="mariadb-account-create-update" Feb 02 11:19:37 crc kubenswrapper[4925]: E0202 11:19:37.959130 4925 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="36eda75d-2be8-431a-9562-95965aa5e22d" containerName="mariadb-database-create" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959136 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="36eda75d-2be8-431a-9562-95965aa5e22d" containerName="mariadb-database-create" Feb 02 11:19:37 crc kubenswrapper[4925]: E0202 11:19:37.959148 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="ceilometer-notification-agent" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959154 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="ceilometer-notification-agent" Feb 02 11:19:37 crc kubenswrapper[4925]: E0202 11:19:37.959162 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="proxy-httpd" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959168 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="proxy-httpd" Feb 02 11:19:37 crc kubenswrapper[4925]: E0202 11:19:37.959178 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95edd3b0-5a13-4845-bfbf-5e8572214a57" containerName="mariadb-account-create-update" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959184 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="95edd3b0-5a13-4845-bfbf-5e8572214a57" containerName="mariadb-account-create-update" Feb 02 11:19:37 crc kubenswrapper[4925]: E0202 11:19:37.959202 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="sg-core" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959207 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="sg-core" Feb 02 11:19:37 crc kubenswrapper[4925]: E0202 11:19:37.959215 4925 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="ceilometer-central-agent" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959221 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="ceilometer-central-agent" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959359 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="724c56e3-b799-47d7-9374-a06c2d5cd6f9" containerName="mariadb-account-create-update" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959369 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="95edd3b0-5a13-4845-bfbf-5e8572214a57" containerName="mariadb-account-create-update" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959381 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="36eda75d-2be8-431a-9562-95965aa5e22d" containerName="mariadb-database-create" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959396 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="ceilometer-central-agent" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959405 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="proxy-httpd" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959420 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="d99078da-9bce-4614-a4a8-e78da62b3f39" containerName="mariadb-account-create-update" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959429 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" containerName="sg-core" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.959438 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" 
containerName="ceilometer-notification-agent" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.960935 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.967911 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.968214 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:19:37 crc kubenswrapper[4925]: I0202 11:19:37.988429 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.042536 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.042596 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-log-httpd\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.042638 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-config-data\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.042663 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.042689 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-scripts\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.042717 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm268\" (UniqueName: \"kubernetes.io/projected/05c50bd8-7295-4568-b0ea-2f4374bee419-kube-api-access-hm268\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.042753 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-run-httpd\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.143981 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-run-httpd\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.144049 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " 
pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.144099 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-log-httpd\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.144141 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-config-data\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.144180 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.144208 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-scripts\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.144238 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm268\" (UniqueName: \"kubernetes.io/projected/05c50bd8-7295-4568-b0ea-2f4374bee419-kube-api-access-hm268\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.145072 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-log-httpd\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.145384 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-run-httpd\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.148722 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-config-data\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.151942 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.152979 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.153966 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-scripts\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.166945 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hm268\" (UniqueName: \"kubernetes.io/projected/05c50bd8-7295-4568-b0ea-2f4374bee419-kube-api-access-hm268\") pod \"ceilometer-0\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.306250 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.692326 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb1481b0-f9a0-4094-84da-002dfab54a82" path="/var/lib/kubelet/pods/eb1481b0-f9a0-4094-84da-002dfab54a82/volumes" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.726579 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8697ffdb94-2bnsl" event={"ID":"38619cef-521e-4e12-9919-8846bed56c10","Type":"ContainerStarted","Data":"973ab3d0589dd08314602b0f07d49dc7016979ef14a33600b09101de725d0b90"} Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.726652 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8697ffdb94-2bnsl" event={"ID":"38619cef-521e-4e12-9919-8846bed56c10","Type":"ContainerStarted","Data":"5558ec36b8bad818c8c891378442fde3b1678e2bac503db79d1d7f0dff6b0192"} Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.726669 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8697ffdb94-2bnsl" event={"ID":"38619cef-521e-4e12-9919-8846bed56c10","Type":"ContainerStarted","Data":"b7afe60d2db05073f9e0f73d95f29022d1c3807aee5de60115a1d377c362bdd2"} Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.726993 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.727732 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 
11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.747750 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.771591 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8697ffdb94-2bnsl" podStartSLOduration=2.771575941 podStartE2EDuration="2.771575941s" podCreationTimestamp="2026-02-02 11:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:19:38.766894776 +0000 UTC m=+1355.771143738" watchObservedRunningTime="2026-02-02 11:19:38.771575941 +0000 UTC m=+1355.775824903" Feb 02 11:19:38 crc kubenswrapper[4925]: I0202 11:19:38.839424 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:19:38 crc kubenswrapper[4925]: W0202 11:19:38.848003 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c50bd8_7295_4568_b0ea_2f4374bee419.slice/crio-9fa0d30811fc38374079217f100c31e46b564e8f9c1bb62be6fc6e718ebc03d6 WatchSource:0}: Error finding container 9fa0d30811fc38374079217f100c31e46b564e8f9c1bb62be6fc6e718ebc03d6: Status 404 returned error can't find the container with id 9fa0d30811fc38374079217f100c31e46b564e8f9c1bb62be6fc6e718ebc03d6 Feb 02 11:19:39 crc kubenswrapper[4925]: I0202 11:19:39.735161 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerStarted","Data":"9fa0d30811fc38374079217f100c31e46b564e8f9c1bb62be6fc6e718ebc03d6"} Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.465273 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7982"] Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.466263 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.469129 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.469719 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9w4vh" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.469977 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.493853 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7982"] Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.601519 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcwfc\" (UniqueName: \"kubernetes.io/projected/49fa273c-1c74-4898-9a16-547d9397e0da-kube-api-access-pcwfc\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.601757 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.601869 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-scripts\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " 
pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.602045 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-config-data\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.704539 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.704685 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-scripts\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.704764 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-config-data\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.704835 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcwfc\" (UniqueName: \"kubernetes.io/projected/49fa273c-1c74-4898-9a16-547d9397e0da-kube-api-access-pcwfc\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: 
\"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.710788 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.713785 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-config-data\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.716394 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-scripts\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.722981 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcwfc\" (UniqueName: \"kubernetes.io/projected/49fa273c-1c74-4898-9a16-547d9397e0da-kube-api-access-pcwfc\") pod \"nova-cell0-conductor-db-sync-r7982\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:40 crc kubenswrapper[4925]: I0202 11:19:40.795271 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:19:41 crc kubenswrapper[4925]: I0202 11:19:41.305627 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7982"] Feb 02 11:19:41 crc kubenswrapper[4925]: I0202 11:19:41.754978 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerStarted","Data":"92fe6b82890bf03ee09b76679160f2ac607e54c3046fe2d2d93e976698fb90c7"} Feb 02 11:19:41 crc kubenswrapper[4925]: I0202 11:19:41.757506 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r7982" event={"ID":"49fa273c-1c74-4898-9a16-547d9397e0da","Type":"ContainerStarted","Data":"55f931783821155c6806001307ca7408a7a45f7e16ab3394a788fe72e9de66e4"} Feb 02 11:19:42 crc kubenswrapper[4925]: I0202 11:19:42.805877 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerStarted","Data":"6d858a620d929eef30f8a9ae67b9ef0ed9d9fae765cf67180b008795f1b672d8"} Feb 02 11:19:42 crc kubenswrapper[4925]: I0202 11:19:42.949218 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.052837 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-pxs8x"] Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.053146 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" podUID="abfcab41-3119-4ec5-94ac-32e949f0de93" containerName="dnsmasq-dns" containerID="cri-o://c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13" gracePeriod=10 Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.398512 4925 patch_prober.go:28] interesting 
pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.398794 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.642971 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.795020 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-sb\") pod \"abfcab41-3119-4ec5-94ac-32e949f0de93\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.795215 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhql9\" (UniqueName: \"kubernetes.io/projected/abfcab41-3119-4ec5-94ac-32e949f0de93-kube-api-access-nhql9\") pod \"abfcab41-3119-4ec5-94ac-32e949f0de93\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.795265 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-nb\") pod \"abfcab41-3119-4ec5-94ac-32e949f0de93\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.795304 4925 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-dns-svc\") pod \"abfcab41-3119-4ec5-94ac-32e949f0de93\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.795346 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-config\") pod \"abfcab41-3119-4ec5-94ac-32e949f0de93\" (UID: \"abfcab41-3119-4ec5-94ac-32e949f0de93\") " Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.801497 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abfcab41-3119-4ec5-94ac-32e949f0de93-kube-api-access-nhql9" (OuterVolumeSpecName: "kube-api-access-nhql9") pod "abfcab41-3119-4ec5-94ac-32e949f0de93" (UID: "abfcab41-3119-4ec5-94ac-32e949f0de93"). InnerVolumeSpecName "kube-api-access-nhql9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.818094 4925 generic.go:334] "Generic (PLEG): container finished" podID="abfcab41-3119-4ec5-94ac-32e949f0de93" containerID="c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13" exitCode=0 Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.818175 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" event={"ID":"abfcab41-3119-4ec5-94ac-32e949f0de93","Type":"ContainerDied","Data":"c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13"} Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.818208 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" event={"ID":"abfcab41-3119-4ec5-94ac-32e949f0de93","Type":"ContainerDied","Data":"0567b16946b87cfbb04a4342f2846877363059726a247875ed3dbbbf237454a0"} Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.818229 4925 scope.go:117] "RemoveContainer" containerID="c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.818381 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-pxs8x" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.835743 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerStarted","Data":"b01ef7e75072c24cb10d87756fe80563088dcd5ce7811511ef051ead32754f39"} Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.849027 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "abfcab41-3119-4ec5-94ac-32e949f0de93" (UID: "abfcab41-3119-4ec5-94ac-32e949f0de93"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.864410 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abfcab41-3119-4ec5-94ac-32e949f0de93" (UID: "abfcab41-3119-4ec5-94ac-32e949f0de93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.868598 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-config" (OuterVolumeSpecName: "config") pod "abfcab41-3119-4ec5-94ac-32e949f0de93" (UID: "abfcab41-3119-4ec5-94ac-32e949f0de93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.878008 4925 scope.go:117] "RemoveContainer" containerID="a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.898289 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhql9\" (UniqueName: \"kubernetes.io/projected/abfcab41-3119-4ec5-94ac-32e949f0de93-kube-api-access-nhql9\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.898365 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.898379 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.898393 4925 reconciler_common.go:293] "Volume detached for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.906696 4925 scope.go:117] "RemoveContainer" containerID="c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.906993 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "abfcab41-3119-4ec5-94ac-32e949f0de93" (UID: "abfcab41-3119-4ec5-94ac-32e949f0de93"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:19:43 crc kubenswrapper[4925]: E0202 11:19:43.907562 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13\": container with ID starting with c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13 not found: ID does not exist" containerID="c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.907604 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13"} err="failed to get container status \"c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13\": rpc error: code = NotFound desc = could not find container \"c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13\": container with ID starting with c72a629806592b7e3c7782bfb19c27ee4a6bc469b5bf289034bd2d3511998c13 not found: ID does not exist" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.907652 4925 scope.go:117] "RemoveContainer" 
containerID="a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8" Feb 02 11:19:43 crc kubenswrapper[4925]: E0202 11:19:43.908822 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8\": container with ID starting with a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8 not found: ID does not exist" containerID="a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8" Feb 02 11:19:43 crc kubenswrapper[4925]: I0202 11:19:43.908877 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8"} err="failed to get container status \"a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8\": rpc error: code = NotFound desc = could not find container \"a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8\": container with ID starting with a9fe731b2e8b135cc7a0c0ca37087e2a805e5d6a9210d60e3c1b28ab256f2cd8 not found: ID does not exist" Feb 02 11:19:44 crc kubenswrapper[4925]: I0202 11:19:43.999978 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abfcab41-3119-4ec5-94ac-32e949f0de93-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:44 crc kubenswrapper[4925]: I0202 11:19:44.169528 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-pxs8x"] Feb 02 11:19:44 crc kubenswrapper[4925]: I0202 11:19:44.182888 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-pxs8x"] Feb 02 11:19:44 crc kubenswrapper[4925]: I0202 11:19:44.266848 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" Feb 02 11:19:44 crc kubenswrapper[4925]: I0202 11:19:44.312517 4925 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" Feb 02 11:19:44 crc kubenswrapper[4925]: I0202 11:19:44.681092 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abfcab41-3119-4ec5-94ac-32e949f0de93" path="/var/lib/kubelet/pods/abfcab41-3119-4ec5-94ac-32e949f0de93/volumes" Feb 02 11:19:48 crc kubenswrapper[4925]: I0202 11:19:48.451436 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:48 crc kubenswrapper[4925]: I0202 11:19:48.643826 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8697ffdb94-2bnsl" Feb 02 11:19:48 crc kubenswrapper[4925]: I0202 11:19:48.713068 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d8bbbb77b-gf2lr"] Feb 02 11:19:48 crc kubenswrapper[4925]: I0202 11:19:48.713307 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api-log" containerID="cri-o://877a4ed9aafd56145b6170d1e71cd4f29d9ca482097a5e24c5bb5b016eb9b863" gracePeriod=30 Feb 02 11:19:48 crc kubenswrapper[4925]: I0202 11:19:48.713711 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api" containerID="cri-o://19cd1951effddfbcf7a16acfc3830e1ee38f63b28a4f432f154de8244a3eea81" gracePeriod=30 Feb 02 11:19:48 crc kubenswrapper[4925]: I0202 11:19:48.896308 4925 generic.go:334] "Generic (PLEG): container finished" podID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerID="877a4ed9aafd56145b6170d1e71cd4f29d9ca482097a5e24c5bb5b016eb9b863" exitCode=143 Feb 02 11:19:48 crc kubenswrapper[4925]: I0202 11:19:48.897355 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-6d8bbbb77b-gf2lr" event={"ID":"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3","Type":"ContainerDied","Data":"877a4ed9aafd56145b6170d1e71cd4f29d9ca482097a5e24c5bb5b016eb9b863"} Feb 02 11:19:51 crc kubenswrapper[4925]: I0202 11:19:51.922314 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:59278->10.217.0.154:9311: read: connection reset by peer" Feb 02 11:19:51 crc kubenswrapper[4925]: I0202 11:19:51.922394 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:59280->10.217.0.154:9311: read: connection reset by peer" Feb 02 11:19:52 crc kubenswrapper[4925]: I0202 11:19:52.942802 4925 generic.go:334] "Generic (PLEG): container finished" podID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerID="19cd1951effddfbcf7a16acfc3830e1ee38f63b28a4f432f154de8244a3eea81" exitCode=0 Feb 02 11:19:52 crc kubenswrapper[4925]: I0202 11:19:52.943137 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" event={"ID":"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3","Type":"ContainerDied","Data":"19cd1951effddfbcf7a16acfc3830e1ee38f63b28a4f432f154de8244a3eea81"} Feb 02 11:19:55 crc kubenswrapper[4925]: I0202 11:19:55.940011 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" Feb 02 11:19:55 crc kubenswrapper[4925]: I0202 11:19:55.994193 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerStarted","Data":"224c4788b2d9b5e0645df5b80f49f68e59b893316da1283ac5cef6912a84da96"} Feb 02 11:19:55 crc kubenswrapper[4925]: I0202 11:19:55.994429 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="ceilometer-central-agent" containerID="cri-o://92fe6b82890bf03ee09b76679160f2ac607e54c3046fe2d2d93e976698fb90c7" gracePeriod=30 Feb 02 11:19:55 crc kubenswrapper[4925]: I0202 11:19:55.994510 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:19:55 crc kubenswrapper[4925]: I0202 11:19:55.994547 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="ceilometer-notification-agent" containerID="cri-o://6d858a620d929eef30f8a9ae67b9ef0ed9d9fae765cf67180b008795f1b672d8" gracePeriod=30 Feb 02 11:19:55 crc kubenswrapper[4925]: I0202 11:19:55.994511 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="proxy-httpd" containerID="cri-o://224c4788b2d9b5e0645df5b80f49f68e59b893316da1283ac5cef6912a84da96" gracePeriod=30 Feb 02 11:19:55 crc kubenswrapper[4925]: I0202 11:19:55.994526 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="sg-core" containerID="cri-o://b01ef7e75072c24cb10d87756fe80563088dcd5ce7811511ef051ead32754f39" gracePeriod=30 Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.060435 4925 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-logs\") pod \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.060515 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data\") pod \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.060633 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-combined-ca-bundle\") pod \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.060680 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g49n\" (UniqueName: \"kubernetes.io/projected/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-kube-api-access-5g49n\") pod \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.060778 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data-custom\") pod \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\" (UID: \"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3\") " Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.061319 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-logs" (OuterVolumeSpecName: "logs") pod "1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" (UID: 
"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.061859 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.062022 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.364398218 podStartE2EDuration="19.062003797s" podCreationTimestamp="2026-02-02 11:19:37 +0000 UTC" firstStartedPulling="2026-02-02 11:19:38.851204335 +0000 UTC m=+1355.855453297" lastFinishedPulling="2026-02-02 11:19:55.548809914 +0000 UTC m=+1372.553058876" observedRunningTime="2026-02-02 11:19:56.027070182 +0000 UTC m=+1373.031319164" watchObservedRunningTime="2026-02-02 11:19:56.062003797 +0000 UTC m=+1373.066252769" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.072298 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-kube-api-access-5g49n" (OuterVolumeSpecName: "kube-api-access-5g49n") pod "1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" (UID: "1d8df520-67f7-49ce-9cc1-4a7ff28e60c3"). InnerVolumeSpecName "kube-api-access-5g49n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.072848 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" (UID: "1d8df520-67f7-49ce-9cc1-4a7ff28e60c3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.074369 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" event={"ID":"1d8df520-67f7-49ce-9cc1-4a7ff28e60c3","Type":"ContainerDied","Data":"2bea0d32eee9f3aa279b298e4904740a83dd76da5e8a44aff76f0a5c222589b4"} Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.074425 4925 scope.go:117] "RemoveContainer" containerID="19cd1951effddfbcf7a16acfc3830e1ee38f63b28a4f432f154de8244a3eea81" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.074438 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d8bbbb77b-gf2lr" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.101243 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" (UID: "1d8df520-67f7-49ce-9cc1-4a7ff28e60c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.133400 4925 scope.go:117] "RemoveContainer" containerID="877a4ed9aafd56145b6170d1e71cd4f29d9ca482097a5e24c5bb5b016eb9b863" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.134316 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data" (OuterVolumeSpecName: "config-data") pod "1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" (UID: "1d8df520-67f7-49ce-9cc1-4a7ff28e60c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.163521 4925 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.163573 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.163585 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.163597 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g49n\" (UniqueName: \"kubernetes.io/projected/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3-kube-api-access-5g49n\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.418641 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d8bbbb77b-gf2lr"] Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.429783 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6d8bbbb77b-gf2lr"] Feb 02 11:19:56 crc kubenswrapper[4925]: I0202 11:19:56.676477 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" path="/var/lib/kubelet/pods/1d8df520-67f7-49ce-9cc1-4a7ff28e60c3/volumes" Feb 02 11:19:57 crc kubenswrapper[4925]: I0202 11:19:57.099877 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r7982" 
event={"ID":"49fa273c-1c74-4898-9a16-547d9397e0da","Type":"ContainerStarted","Data":"3c26de6cfefc8ac9af187925e530198add4d54e8708f781cc0adf3aa067fea01"} Feb 02 11:19:57 crc kubenswrapper[4925]: I0202 11:19:57.103701 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d8tqm" event={"ID":"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0","Type":"ContainerStarted","Data":"c435f515ee52e76b55caaa8ade2d4d35413a90e0394d4fb2904420836f56e1a6"} Feb 02 11:19:57 crc kubenswrapper[4925]: I0202 11:19:57.108146 4925 generic.go:334] "Generic (PLEG): container finished" podID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerID="b01ef7e75072c24cb10d87756fe80563088dcd5ce7811511ef051ead32754f39" exitCode=2 Feb 02 11:19:57 crc kubenswrapper[4925]: I0202 11:19:57.108197 4925 generic.go:334] "Generic (PLEG): container finished" podID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerID="92fe6b82890bf03ee09b76679160f2ac607e54c3046fe2d2d93e976698fb90c7" exitCode=0 Feb 02 11:19:57 crc kubenswrapper[4925]: I0202 11:19:57.108223 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerDied","Data":"b01ef7e75072c24cb10d87756fe80563088dcd5ce7811511ef051ead32754f39"} Feb 02 11:19:57 crc kubenswrapper[4925]: I0202 11:19:57.108250 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerDied","Data":"92fe6b82890bf03ee09b76679160f2ac607e54c3046fe2d2d93e976698fb90c7"} Feb 02 11:19:57 crc kubenswrapper[4925]: I0202 11:19:57.131149 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-r7982" podStartSLOduration=2.721129002 podStartE2EDuration="17.131123147s" podCreationTimestamp="2026-02-02 11:19:40 +0000 UTC" firstStartedPulling="2026-02-02 11:19:41.311319374 +0000 UTC m=+1358.315568326" lastFinishedPulling="2026-02-02 
11:19:55.721313509 +0000 UTC m=+1372.725562471" observedRunningTime="2026-02-02 11:19:57.122052355 +0000 UTC m=+1374.126301337" watchObservedRunningTime="2026-02-02 11:19:57.131123147 +0000 UTC m=+1374.135372119" Feb 02 11:19:57 crc kubenswrapper[4925]: I0202 11:19:57.158341 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-d8tqm" podStartSLOduration=11.468794536 podStartE2EDuration="1m52.158319225s" podCreationTimestamp="2026-02-02 11:18:05 +0000 UTC" firstStartedPulling="2026-02-02 11:18:06.521188099 +0000 UTC m=+1263.525437071" lastFinishedPulling="2026-02-02 11:19:47.210712798 +0000 UTC m=+1364.214961760" observedRunningTime="2026-02-02 11:19:57.156103366 +0000 UTC m=+1374.160352338" watchObservedRunningTime="2026-02-02 11:19:57.158319225 +0000 UTC m=+1374.162568187" Feb 02 11:20:00 crc kubenswrapper[4925]: I0202 11:20:00.137952 4925 generic.go:334] "Generic (PLEG): container finished" podID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerID="6d858a620d929eef30f8a9ae67b9ef0ed9d9fae765cf67180b008795f1b672d8" exitCode=0 Feb 02 11:20:00 crc kubenswrapper[4925]: I0202 11:20:00.138027 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerDied","Data":"6d858a620d929eef30f8a9ae67b9ef0ed9d9fae765cf67180b008795f1b672d8"} Feb 02 11:20:01 crc kubenswrapper[4925]: I0202 11:20:01.678811 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-596c688466-nwnv5" Feb 02 11:20:01 crc kubenswrapper[4925]: I0202 11:20:01.682754 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-596c688466-nwnv5" Feb 02 11:20:08 crc kubenswrapper[4925]: I0202 11:20:08.209169 4925 generic.go:334] "Generic (PLEG): container finished" podID="e9ab37c5-8a76-48f2-ade7-92735dc062c4" containerID="cf3a517b2cd796ca28b1a9fe64e8cb308993e7a69c4c0bec2db79a6d11eeb1c0" 
exitCode=0 Feb 02 11:20:08 crc kubenswrapper[4925]: I0202 11:20:08.209269 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9cjl8" event={"ID":"e9ab37c5-8a76-48f2-ade7-92735dc062c4","Type":"ContainerDied","Data":"cf3a517b2cd796ca28b1a9fe64e8cb308993e7a69c4c0bec2db79a6d11eeb1c0"} Feb 02 11:20:08 crc kubenswrapper[4925]: I0202 11:20:08.312516 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.534863 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.610202 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-config\") pod \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.610264 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-combined-ca-bundle\") pod \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.610425 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smtqs\" (UniqueName: \"kubernetes.io/projected/e9ab37c5-8a76-48f2-ade7-92735dc062c4-kube-api-access-smtqs\") pod \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\" (UID: \"e9ab37c5-8a76-48f2-ade7-92735dc062c4\") " Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.627523 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e9ab37c5-8a76-48f2-ade7-92735dc062c4-kube-api-access-smtqs" (OuterVolumeSpecName: "kube-api-access-smtqs") pod "e9ab37c5-8a76-48f2-ade7-92735dc062c4" (UID: "e9ab37c5-8a76-48f2-ade7-92735dc062c4"). InnerVolumeSpecName "kube-api-access-smtqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.644361 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-config" (OuterVolumeSpecName: "config") pod "e9ab37c5-8a76-48f2-ade7-92735dc062c4" (UID: "e9ab37c5-8a76-48f2-ade7-92735dc062c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.645936 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9ab37c5-8a76-48f2-ade7-92735dc062c4" (UID: "e9ab37c5-8a76-48f2-ade7-92735dc062c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.716398 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.716440 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ab37c5-8a76-48f2-ade7-92735dc062c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:09 crc kubenswrapper[4925]: I0202 11:20:09.716453 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smtqs\" (UniqueName: \"kubernetes.io/projected/e9ab37c5-8a76-48f2-ade7-92735dc062c4-kube-api-access-smtqs\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.227104 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9cjl8" event={"ID":"e9ab37c5-8a76-48f2-ade7-92735dc062c4","Type":"ContainerDied","Data":"cb1e35a0b46084a07acc259727cc77e33fdf1dc1c7b6e1b231b8ab7603a48ccb"} Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.227155 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1e35a0b46084a07acc259727cc77e33fdf1dc1c7b6e1b231b8ab7603a48ccb" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.227407 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9cjl8" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.486697 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-blgtm"] Feb 02 11:20:10 crc kubenswrapper[4925]: E0202 11:20:10.487401 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ab37c5-8a76-48f2-ade7-92735dc062c4" containerName="neutron-db-sync" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.487420 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ab37c5-8a76-48f2-ade7-92735dc062c4" containerName="neutron-db-sync" Feb 02 11:20:10 crc kubenswrapper[4925]: E0202 11:20:10.487434 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api-log" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.487441 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api-log" Feb 02 11:20:10 crc kubenswrapper[4925]: E0202 11:20:10.487460 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abfcab41-3119-4ec5-94ac-32e949f0de93" containerName="init" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.487466 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="abfcab41-3119-4ec5-94ac-32e949f0de93" containerName="init" Feb 02 11:20:10 crc kubenswrapper[4925]: E0202 11:20:10.487478 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.487484 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api" Feb 02 11:20:10 crc kubenswrapper[4925]: E0202 11:20:10.487496 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abfcab41-3119-4ec5-94ac-32e949f0de93" containerName="dnsmasq-dns" Feb 02 
11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.487503 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="abfcab41-3119-4ec5-94ac-32e949f0de93" containerName="dnsmasq-dns" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.487659 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.487675 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ab37c5-8a76-48f2-ade7-92735dc062c4" containerName="neutron-db-sync" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.487685 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="abfcab41-3119-4ec5-94ac-32e949f0de93" containerName="dnsmasq-dns" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.487699 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8df520-67f7-49ce-9cc1-4a7ff28e60c3" containerName="barbican-api-log" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.488563 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.495833 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-blgtm"] Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.567877 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5958464848-b4clm"] Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.580798 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.584511 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.584689 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vdmk7" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.589302 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5958464848-b4clm"] Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.590535 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.591330 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.635256 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.635322 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-config\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.635380 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-dns-svc\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: 
\"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.635402 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.635424 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbv9p\" (UniqueName: \"kubernetes.io/projected/53f3a16f-3d0e-409d-b652-c5179bb34e2a-kube-api-access-rbv9p\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737247 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-combined-ca-bundle\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737290 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-config\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737342 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnx2v\" (UniqueName: \"kubernetes.io/projected/654b5fff-2101-4f2a-9cc1-1a001d28f425-kube-api-access-xnx2v\") pod 
\"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737375 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737400 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-ovndb-tls-certs\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737429 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-config\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737478 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-dns-svc\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737495 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " 
pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737513 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbv9p\" (UniqueName: \"kubernetes.io/projected/53f3a16f-3d0e-409d-b652-c5179bb34e2a-kube-api-access-rbv9p\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.737547 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-httpd-config\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.738707 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-config\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.739249 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-dns-svc\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.739681 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: 
I0202 11:20:10.739777 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.761671 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbv9p\" (UniqueName: \"kubernetes.io/projected/53f3a16f-3d0e-409d-b652-c5179bb34e2a-kube-api-access-rbv9p\") pod \"dnsmasq-dns-6bb684768f-blgtm\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.815543 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.838899 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-httpd-config\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.838965 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-combined-ca-bundle\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.838988 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-config\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " 
pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.839023 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnx2v\" (UniqueName: \"kubernetes.io/projected/654b5fff-2101-4f2a-9cc1-1a001d28f425-kube-api-access-xnx2v\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.839286 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-ovndb-tls-certs\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.844732 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-combined-ca-bundle\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.844748 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-httpd-config\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.845187 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-ovndb-tls-certs\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.845596 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-config\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.859915 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnx2v\" (UniqueName: \"kubernetes.io/projected/654b5fff-2101-4f2a-9cc1-1a001d28f425-kube-api-access-xnx2v\") pod \"neutron-5958464848-b4clm\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:10 crc kubenswrapper[4925]: I0202 11:20:10.918508 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:11 crc kubenswrapper[4925]: I0202 11:20:11.304002 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-blgtm"] Feb 02 11:20:11 crc kubenswrapper[4925]: I0202 11:20:11.536651 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5958464848-b4clm"] Feb 02 11:20:12 crc kubenswrapper[4925]: I0202 11:20:12.284373 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5958464848-b4clm" event={"ID":"654b5fff-2101-4f2a-9cc1-1a001d28f425","Type":"ContainerStarted","Data":"68a1a633280acf1c2d2483df656adb891bc37daa1a3ca94a10cd19000b73879f"} Feb 02 11:20:12 crc kubenswrapper[4925]: I0202 11:20:12.284889 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5958464848-b4clm" event={"ID":"654b5fff-2101-4f2a-9cc1-1a001d28f425","Type":"ContainerStarted","Data":"b4a03616d5db2fe43ff929d2b9715f22f4e1fb9edea78a3c50ee5a93b51a064f"} Feb 02 11:20:12 crc kubenswrapper[4925]: I0202 11:20:12.290429 4925 generic.go:334] "Generic (PLEG): container finished" podID="53f3a16f-3d0e-409d-b652-c5179bb34e2a" 
containerID="fb108db7618fcaf3f8ad414e579e24851a6b3b7ffbca0b29c0a4b3a0bc4bbf93" exitCode=0 Feb 02 11:20:12 crc kubenswrapper[4925]: I0202 11:20:12.290483 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" event={"ID":"53f3a16f-3d0e-409d-b652-c5179bb34e2a","Type":"ContainerDied","Data":"fb108db7618fcaf3f8ad414e579e24851a6b3b7ffbca0b29c0a4b3a0bc4bbf93"} Feb 02 11:20:12 crc kubenswrapper[4925]: I0202 11:20:12.290527 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" event={"ID":"53f3a16f-3d0e-409d-b652-c5179bb34e2a","Type":"ContainerStarted","Data":"fb0a49347fe95fce3f3ee1c35ec846688dea12a319f19a42857e707e9b8dfbca"} Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.026206 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7755c4bbbc-qkg7f"] Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.028103 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.030360 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.030836 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.043289 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7755c4bbbc-qkg7f"] Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.189501 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-internal-tls-certs\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: 
I0202 11:20:13.189556 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74gc\" (UniqueName: \"kubernetes.io/projected/66b56382-6514-4567-8b82-42454f43f8d1-kube-api-access-z74gc\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.189589 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-config\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.189675 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-httpd-config\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.189824 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-ovndb-tls-certs\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.189956 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-combined-ca-bundle\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 
11:20:13.190013 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-public-tls-certs\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.291879 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-httpd-config\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.291996 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-ovndb-tls-certs\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.292056 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-combined-ca-bundle\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.292183 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-public-tls-certs\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.292865 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-internal-tls-certs\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.292899 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z74gc\" (UniqueName: \"kubernetes.io/projected/66b56382-6514-4567-8b82-42454f43f8d1-kube-api-access-z74gc\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.292935 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-config\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.298904 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-internal-tls-certs\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.299229 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-combined-ca-bundle\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.301445 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-public-tls-certs\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.302934 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-ovndb-tls-certs\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.303234 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-config\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.303706 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/66b56382-6514-4567-8b82-42454f43f8d1-httpd-config\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.314165 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" event={"ID":"53f3a16f-3d0e-409d-b652-c5179bb34e2a","Type":"ContainerStarted","Data":"eaf04ffb01bf821ea4c76b7d96a541e359cf4b83af4e1f6ecbe9a0dd97a20a43"} Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.314276 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.317993 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5958464848-b4clm" 
event={"ID":"654b5fff-2101-4f2a-9cc1-1a001d28f425","Type":"ContainerStarted","Data":"eb1ca3101da3434c92e4496c45ed59803da2865d7f4bddf4d14ff50c0e7f38e1"} Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.318710 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.329457 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z74gc\" (UniqueName: \"kubernetes.io/projected/66b56382-6514-4567-8b82-42454f43f8d1-kube-api-access-z74gc\") pod \"neutron-7755c4bbbc-qkg7f\" (UID: \"66b56382-6514-4567-8b82-42454f43f8d1\") " pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.341189 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" podStartSLOduration=3.341166805 podStartE2EDuration="3.341166805s" podCreationTimestamp="2026-02-02 11:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:13.337913798 +0000 UTC m=+1390.342162760" watchObservedRunningTime="2026-02-02 11:20:13.341166805 +0000 UTC m=+1390.345415787" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.362144 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.398550 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.398644 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.406548 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5958464848-b4clm" podStartSLOduration=3.406528254 podStartE2EDuration="3.406528254s" podCreationTimestamp="2026-02-02 11:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:13.370903941 +0000 UTC m=+1390.375152913" watchObservedRunningTime="2026-02-02 11:20:13.406528254 +0000 UTC m=+1390.410777216" Feb 02 11:20:13 crc kubenswrapper[4925]: I0202 11:20:13.907287 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7755c4bbbc-qkg7f"] Feb 02 11:20:14 crc kubenswrapper[4925]: I0202 11:20:14.329166 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7755c4bbbc-qkg7f" event={"ID":"66b56382-6514-4567-8b82-42454f43f8d1","Type":"ContainerStarted","Data":"56f852f391678cd8a64b14d39b0f0e5a196c80f51281e4e99d1a8c783b0b2775"} Feb 02 11:20:14 crc kubenswrapper[4925]: I0202 11:20:14.329568 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-7755c4bbbc-qkg7f" event={"ID":"66b56382-6514-4567-8b82-42454f43f8d1","Type":"ContainerStarted","Data":"46549f96766c8ae74e99db08589abb9d5361479dbe09a3de5b7f703145545c8c"} Feb 02 11:20:15 crc kubenswrapper[4925]: I0202 11:20:15.339799 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7755c4bbbc-qkg7f" event={"ID":"66b56382-6514-4567-8b82-42454f43f8d1","Type":"ContainerStarted","Data":"8f6e64f8d28ae97d7f844d6d3b189d2705a4bfc704c86cd986efa3d52749e29f"} Feb 02 11:20:15 crc kubenswrapper[4925]: I0202 11:20:15.340219 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:15 crc kubenswrapper[4925]: I0202 11:20:15.342651 4925 generic.go:334] "Generic (PLEG): container finished" podID="b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" containerID="c435f515ee52e76b55caaa8ade2d4d35413a90e0394d4fb2904420836f56e1a6" exitCode=0 Feb 02 11:20:15 crc kubenswrapper[4925]: I0202 11:20:15.342688 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d8tqm" event={"ID":"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0","Type":"ContainerDied","Data":"c435f515ee52e76b55caaa8ade2d4d35413a90e0394d4fb2904420836f56e1a6"} Feb 02 11:20:15 crc kubenswrapper[4925]: I0202 11:20:15.364272 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7755c4bbbc-qkg7f" podStartSLOduration=2.3642501940000002 podStartE2EDuration="2.364250194s" podCreationTimestamp="2026-02-02 11:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:15.35776857 +0000 UTC m=+1392.362017532" watchObservedRunningTime="2026-02-02 11:20:15.364250194 +0000 UTC m=+1392.368499166" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.757053 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.856449 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-scripts\") pod \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.856585 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-etc-machine-id\") pod \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.856638 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-config-data\") pod \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.856721 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-combined-ca-bundle\") pod \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.856788 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hxzm\" (UniqueName: \"kubernetes.io/projected/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-kube-api-access-6hxzm\") pod \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.856838 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-db-sync-config-data\") pod \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\" (UID: \"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0\") " Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.857389 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" (UID: "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.857707 4925 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.864284 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-scripts" (OuterVolumeSpecName: "scripts") pod "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" (UID: "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.864435 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-kube-api-access-6hxzm" (OuterVolumeSpecName: "kube-api-access-6hxzm") pod "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" (UID: "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0"). InnerVolumeSpecName "kube-api-access-6hxzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.866399 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" (UID: "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.903340 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" (UID: "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.912174 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-config-data" (OuterVolumeSpecName: "config-data") pod "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" (UID: "b26690f1-3d10-4ef3-a16a-7c33dc1c62c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.959563 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hxzm\" (UniqueName: \"kubernetes.io/projected/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-kube-api-access-6hxzm\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.959603 4925 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.959618 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.959631 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:16 crc kubenswrapper[4925]: I0202 11:20:16.959642 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.359479 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d8tqm" event={"ID":"b26690f1-3d10-4ef3-a16a-7c33dc1c62c0","Type":"ContainerDied","Data":"018be16b54d7ab34681f9d91c19d40a5645a9f82dbfc62ff97d016e6638e8e13"} Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.359825 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018be16b54d7ab34681f9d91c19d40a5645a9f82dbfc62ff97d016e6638e8e13" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.359573 4925 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d8tqm" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.620754 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 11:20:17 crc kubenswrapper[4925]: E0202 11:20:17.621201 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" containerName="cinder-db-sync" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.621216 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" containerName="cinder-db-sync" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.621423 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" containerName="cinder-db-sync" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.622534 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.625632 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dx7sq" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.625765 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.625871 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.626677 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.639720 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.689855 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6bb684768f-blgtm"] Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.690559 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" podUID="53f3a16f-3d0e-409d-b652-c5179bb34e2a" containerName="dnsmasq-dns" containerID="cri-o://eaf04ffb01bf821ea4c76b7d96a541e359cf4b83af4e1f6ecbe9a0dd97a20a43" gracePeriod=10 Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.692288 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.773998 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-scripts\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.774097 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.774119 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.774256 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7c0eea36-0532-4d2f-9106-5e7283073cf5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.774316 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9cl\" (UniqueName: \"kubernetes.io/projected/7c0eea36-0532-4d2f-9106-5e7283073cf5-kube-api-access-8h9cl\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.774456 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.876425 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.876475 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.876508 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c0eea36-0532-4d2f-9106-5e7283073cf5-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.876541 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9cl\" (UniqueName: \"kubernetes.io/projected/7c0eea36-0532-4d2f-9106-5e7283073cf5-kube-api-access-8h9cl\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.876607 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.876636 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-scripts\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.876661 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c0eea36-0532-4d2f-9106-5e7283073cf5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.881583 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.882011 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-scripts\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.882031 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:17 crc kubenswrapper[4925]: I0202 11:20:17.882586 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.031799 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9cl\" (UniqueName: \"kubernetes.io/projected/7c0eea36-0532-4d2f-9106-5e7283073cf5-kube-api-access-8h9cl\") pod \"cinder-scheduler-0\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.039588 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-64lxq"] Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.042168 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.058101 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-64lxq"] Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.078570 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.078624 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-config\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.078661 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlnl\" (UniqueName: \"kubernetes.io/projected/18037f81-0d3a-411d-a0f3-275b5422276e-kube-api-access-9zlnl\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.078677 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.078697 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.108905 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.114737 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.116899 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.129221 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.180833 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-scripts\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.180890 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7728c259-3944-4f6b-baad-0e09278bba1b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.180959 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728c259-3944-4f6b-baad-0e09278bba1b-logs\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 
crc kubenswrapper[4925]: I0202 11:20:18.180988 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.181011 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.181030 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.181053 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-config\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.181109 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlnl\" (UniqueName: \"kubernetes.io/projected/18037f81-0d3a-411d-a0f3-275b5422276e-kube-api-access-9zlnl\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.181126 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.181148 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.181183 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data-custom\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.181206 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbrp\" (UniqueName: \"kubernetes.io/projected/7728c259-3944-4f6b-baad-0e09278bba1b-kube-api-access-9wbrp\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.182209 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.183776 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.183959 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-config\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.184703 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.240688 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlnl\" (UniqueName: \"kubernetes.io/projected/18037f81-0d3a-411d-a0f3-275b5422276e-kube-api-access-9zlnl\") pod \"dnsmasq-dns-6d97fcdd8f-64lxq\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.243471 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.282402 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data-custom\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.282459 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wbrp\" (UniqueName: \"kubernetes.io/projected/7728c259-3944-4f6b-baad-0e09278bba1b-kube-api-access-9wbrp\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.282503 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-scripts\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.282528 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7728c259-3944-4f6b-baad-0e09278bba1b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.282574 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728c259-3944-4f6b-baad-0e09278bba1b-logs\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.282609 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.282631 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.283743 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7728c259-3944-4f6b-baad-0e09278bba1b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.283782 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728c259-3944-4f6b-baad-0e09278bba1b-logs\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.286805 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.287422 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-scripts\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.287814 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data-custom\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.295884 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.306461 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbrp\" (UniqueName: \"kubernetes.io/projected/7728c259-3944-4f6b-baad-0e09278bba1b-kube-api-access-9wbrp\") pod \"cinder-api-0\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.374461 4925 generic.go:334] "Generic (PLEG): container finished" podID="53f3a16f-3d0e-409d-b652-c5179bb34e2a" containerID="eaf04ffb01bf821ea4c76b7d96a541e359cf4b83af4e1f6ecbe9a0dd97a20a43" exitCode=0 Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.374513 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" event={"ID":"53f3a16f-3d0e-409d-b652-c5179bb34e2a","Type":"ContainerDied","Data":"eaf04ffb01bf821ea4c76b7d96a541e359cf4b83af4e1f6ecbe9a0dd97a20a43"} Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.480472 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.485567 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.577137 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.701776 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-dns-svc\") pod \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.701829 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-config\") pod \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.701851 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-sb\") pod \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.701970 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-nb\") pod \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\" (UID: \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.702026 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbv9p\" (UniqueName: \"kubernetes.io/projected/53f3a16f-3d0e-409d-b652-c5179bb34e2a-kube-api-access-rbv9p\") pod \"53f3a16f-3d0e-409d-b652-c5179bb34e2a\" (UID: 
\"53f3a16f-3d0e-409d-b652-c5179bb34e2a\") " Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.724542 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f3a16f-3d0e-409d-b652-c5179bb34e2a-kube-api-access-rbv9p" (OuterVolumeSpecName: "kube-api-access-rbv9p") pod "53f3a16f-3d0e-409d-b652-c5179bb34e2a" (UID: "53f3a16f-3d0e-409d-b652-c5179bb34e2a"). InnerVolumeSpecName "kube-api-access-rbv9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.779910 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-config" (OuterVolumeSpecName: "config") pod "53f3a16f-3d0e-409d-b652-c5179bb34e2a" (UID: "53f3a16f-3d0e-409d-b652-c5179bb34e2a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.784928 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53f3a16f-3d0e-409d-b652-c5179bb34e2a" (UID: "53f3a16f-3d0e-409d-b652-c5179bb34e2a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.784937 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "53f3a16f-3d0e-409d-b652-c5179bb34e2a" (UID: "53f3a16f-3d0e-409d-b652-c5179bb34e2a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.787050 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "53f3a16f-3d0e-409d-b652-c5179bb34e2a" (UID: "53f3a16f-3d0e-409d-b652-c5179bb34e2a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.805449 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbv9p\" (UniqueName: \"kubernetes.io/projected/53f3a16f-3d0e-409d-b652-c5179bb34e2a-kube-api-access-rbv9p\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.805485 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.805495 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.805503 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.805512 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53f3a16f-3d0e-409d-b652-c5179bb34e2a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:18 crc kubenswrapper[4925]: I0202 11:20:18.939209 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 11:20:19 crc 
kubenswrapper[4925]: I0202 11:20:19.081726 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-64lxq"] Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.168271 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.384357 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7728c259-3944-4f6b-baad-0e09278bba1b","Type":"ContainerStarted","Data":"5a164e83904201705c3c0d519b349f535d744d9fc8894ef7144e7b82d6d950f8"} Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.385929 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7c0eea36-0532-4d2f-9106-5e7283073cf5","Type":"ContainerStarted","Data":"921a286ccd5b045d4eb61af5c5ca783af4dc7204574c55d9768a369b69f83768"} Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.387425 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" event={"ID":"18037f81-0d3a-411d-a0f3-275b5422276e","Type":"ContainerStarted","Data":"2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373"} Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.387484 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" event={"ID":"18037f81-0d3a-411d-a0f3-275b5422276e","Type":"ContainerStarted","Data":"e10517e4fd5146d1ef8af9ecc1194299996096c6e180cdaa4a52c21bf0d2cff1"} Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.391336 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" event={"ID":"53f3a16f-3d0e-409d-b652-c5179bb34e2a","Type":"ContainerDied","Data":"fb0a49347fe95fce3f3ee1c35ec846688dea12a319f19a42857e707e9b8dfbca"} Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.391370 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-blgtm" Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.391397 4925 scope.go:117] "RemoveContainer" containerID="eaf04ffb01bf821ea4c76b7d96a541e359cf4b83af4e1f6ecbe9a0dd97a20a43" Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.444048 4925 scope.go:117] "RemoveContainer" containerID="fb108db7618fcaf3f8ad414e579e24851a6b3b7ffbca0b29c0a4b3a0bc4bbf93" Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.486299 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-blgtm"] Feb 02 11:20:19 crc kubenswrapper[4925]: I0202 11:20:19.498314 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-blgtm"] Feb 02 11:20:20 crc kubenswrapper[4925]: I0202 11:20:20.019019 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 11:20:20 crc kubenswrapper[4925]: I0202 11:20:20.404246 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7728c259-3944-4f6b-baad-0e09278bba1b","Type":"ContainerStarted","Data":"963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b"} Feb 02 11:20:20 crc kubenswrapper[4925]: I0202 11:20:20.417217 4925 generic.go:334] "Generic (PLEG): container finished" podID="18037f81-0d3a-411d-a0f3-275b5422276e" containerID="2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373" exitCode=0 Feb 02 11:20:20 crc kubenswrapper[4925]: I0202 11:20:20.417281 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" event={"ID":"18037f81-0d3a-411d-a0f3-275b5422276e","Type":"ContainerDied","Data":"2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373"} Feb 02 11:20:20 crc kubenswrapper[4925]: I0202 11:20:20.685257 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f3a16f-3d0e-409d-b652-c5179bb34e2a" 
path="/var/lib/kubelet/pods/53f3a16f-3d0e-409d-b652-c5179bb34e2a/volumes" Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.431211 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7728c259-3944-4f6b-baad-0e09278bba1b","Type":"ContainerStarted","Data":"bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2"} Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.431564 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7728c259-3944-4f6b-baad-0e09278bba1b" containerName="cinder-api-log" containerID="cri-o://963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b" gracePeriod=30 Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.431747 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7728c259-3944-4f6b-baad-0e09278bba1b" containerName="cinder-api" containerID="cri-o://bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2" gracePeriod=30 Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.431932 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.434673 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7c0eea36-0532-4d2f-9106-5e7283073cf5","Type":"ContainerStarted","Data":"7c95b2ef69ea98fc2ae585b4805c4057c66d8597cd6eb9b3c91c8bf64958ed82"} Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.434702 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7c0eea36-0532-4d2f-9106-5e7283073cf5","Type":"ContainerStarted","Data":"fca3a9bb1bf1d8b6f62123742bf0dab954bf7b2eb7128de88931364df32f5494"} Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.439522 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" 
event={"ID":"18037f81-0d3a-411d-a0f3-275b5422276e","Type":"ContainerStarted","Data":"039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae"} Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.439780 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.455731 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.455714045 podStartE2EDuration="3.455714045s" podCreationTimestamp="2026-02-02 11:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:21.450593498 +0000 UTC m=+1398.454842460" watchObservedRunningTime="2026-02-02 11:20:21.455714045 +0000 UTC m=+1398.459963007" Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.477688 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.613163668 podStartE2EDuration="4.477668173s" podCreationTimestamp="2026-02-02 11:20:17 +0000 UTC" firstStartedPulling="2026-02-02 11:20:18.943438555 +0000 UTC m=+1395.947687517" lastFinishedPulling="2026-02-02 11:20:19.80794304 +0000 UTC m=+1396.812192022" observedRunningTime="2026-02-02 11:20:21.474601951 +0000 UTC m=+1398.478850913" watchObservedRunningTime="2026-02-02 11:20:21.477668173 +0000 UTC m=+1398.481917135" Feb 02 11:20:21 crc kubenswrapper[4925]: I0202 11:20:21.499556 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" podStartSLOduration=4.499532598 podStartE2EDuration="4.499532598s" podCreationTimestamp="2026-02-02 11:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:21.490918938 +0000 UTC m=+1398.495167920" 
watchObservedRunningTime="2026-02-02 11:20:21.499532598 +0000 UTC m=+1398.503781580" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.078241 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.178757 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data-custom\") pod \"7728c259-3944-4f6b-baad-0e09278bba1b\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.179214 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-combined-ca-bundle\") pod \"7728c259-3944-4f6b-baad-0e09278bba1b\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.179235 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data\") pod \"7728c259-3944-4f6b-baad-0e09278bba1b\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.179275 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-scripts\") pod \"7728c259-3944-4f6b-baad-0e09278bba1b\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.179343 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wbrp\" (UniqueName: \"kubernetes.io/projected/7728c259-3944-4f6b-baad-0e09278bba1b-kube-api-access-9wbrp\") pod \"7728c259-3944-4f6b-baad-0e09278bba1b\" (UID: 
\"7728c259-3944-4f6b-baad-0e09278bba1b\") " Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.179376 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728c259-3944-4f6b-baad-0e09278bba1b-logs\") pod \"7728c259-3944-4f6b-baad-0e09278bba1b\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.179483 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7728c259-3944-4f6b-baad-0e09278bba1b-etc-machine-id\") pod \"7728c259-3944-4f6b-baad-0e09278bba1b\" (UID: \"7728c259-3944-4f6b-baad-0e09278bba1b\") " Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.179651 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7728c259-3944-4f6b-baad-0e09278bba1b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7728c259-3944-4f6b-baad-0e09278bba1b" (UID: "7728c259-3944-4f6b-baad-0e09278bba1b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.179800 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7728c259-3944-4f6b-baad-0e09278bba1b-logs" (OuterVolumeSpecName: "logs") pod "7728c259-3944-4f6b-baad-0e09278bba1b" (UID: "7728c259-3944-4f6b-baad-0e09278bba1b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.180128 4925 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7728c259-3944-4f6b-baad-0e09278bba1b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.180153 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7728c259-3944-4f6b-baad-0e09278bba1b-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.186491 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7728c259-3944-4f6b-baad-0e09278bba1b-kube-api-access-9wbrp" (OuterVolumeSpecName: "kube-api-access-9wbrp") pod "7728c259-3944-4f6b-baad-0e09278bba1b" (UID: "7728c259-3944-4f6b-baad-0e09278bba1b"). InnerVolumeSpecName "kube-api-access-9wbrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.192693 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7728c259-3944-4f6b-baad-0e09278bba1b" (UID: "7728c259-3944-4f6b-baad-0e09278bba1b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.194172 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-scripts" (OuterVolumeSpecName: "scripts") pod "7728c259-3944-4f6b-baad-0e09278bba1b" (UID: "7728c259-3944-4f6b-baad-0e09278bba1b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.212209 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7728c259-3944-4f6b-baad-0e09278bba1b" (UID: "7728c259-3944-4f6b-baad-0e09278bba1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.259210 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data" (OuterVolumeSpecName: "config-data") pod "7728c259-3944-4f6b-baad-0e09278bba1b" (UID: "7728c259-3944-4f6b-baad-0e09278bba1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.281880 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wbrp\" (UniqueName: \"kubernetes.io/projected/7728c259-3944-4f6b-baad-0e09278bba1b-kube-api-access-9wbrp\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.281922 4925 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.281935 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.281948 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-config-data\") on node \"crc\" 
DevicePath \"\"" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.281962 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7728c259-3944-4f6b-baad-0e09278bba1b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.449874 4925 generic.go:334] "Generic (PLEG): container finished" podID="7728c259-3944-4f6b-baad-0e09278bba1b" containerID="bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2" exitCode=0 Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.449916 4925 generic.go:334] "Generic (PLEG): container finished" podID="7728c259-3944-4f6b-baad-0e09278bba1b" containerID="963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b" exitCode=143 Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.449937 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.449976 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7728c259-3944-4f6b-baad-0e09278bba1b","Type":"ContainerDied","Data":"bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2"} Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.450052 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7728c259-3944-4f6b-baad-0e09278bba1b","Type":"ContainerDied","Data":"963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b"} Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.450065 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7728c259-3944-4f6b-baad-0e09278bba1b","Type":"ContainerDied","Data":"5a164e83904201705c3c0d519b349f535d744d9fc8894ef7144e7b82d6d950f8"} Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.450096 4925 scope.go:117] "RemoveContainer" 
containerID="bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.490064 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.495753 4925 scope.go:117] "RemoveContainer" containerID="963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.506469 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.519859 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 11:20:22 crc kubenswrapper[4925]: E0202 11:20:22.520601 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f3a16f-3d0e-409d-b652-c5179bb34e2a" containerName="dnsmasq-dns" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.520627 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f3a16f-3d0e-409d-b652-c5179bb34e2a" containerName="dnsmasq-dns" Feb 02 11:20:22 crc kubenswrapper[4925]: E0202 11:20:22.520657 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f3a16f-3d0e-409d-b652-c5179bb34e2a" containerName="init" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.520667 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f3a16f-3d0e-409d-b652-c5179bb34e2a" containerName="init" Feb 02 11:20:22 crc kubenswrapper[4925]: E0202 11:20:22.520684 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7728c259-3944-4f6b-baad-0e09278bba1b" containerName="cinder-api-log" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.520692 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7728c259-3944-4f6b-baad-0e09278bba1b" containerName="cinder-api-log" Feb 02 11:20:22 crc kubenswrapper[4925]: E0202 11:20:22.520705 4925 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7728c259-3944-4f6b-baad-0e09278bba1b" containerName="cinder-api" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.520714 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7728c259-3944-4f6b-baad-0e09278bba1b" containerName="cinder-api" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.520919 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7728c259-3944-4f6b-baad-0e09278bba1b" containerName="cinder-api-log" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.520939 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7728c259-3944-4f6b-baad-0e09278bba1b" containerName="cinder-api" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.520952 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f3a16f-3d0e-409d-b652-c5179bb34e2a" containerName="dnsmasq-dns" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.522029 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.524402 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.525024 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.525360 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.531298 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.548254 4925 scope.go:117] "RemoveContainer" containerID="bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2" Feb 02 11:20:22 crc kubenswrapper[4925]: E0202 11:20:22.549758 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2\": container with ID starting with bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2 not found: ID does not exist" containerID="bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.549791 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2"} err="failed to get container status \"bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2\": rpc error: code = NotFound desc = could not find container \"bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2\": container with ID starting with bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2 not found: ID does not exist" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.549821 4925 scope.go:117] "RemoveContainer" containerID="963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b" Feb 02 11:20:22 crc kubenswrapper[4925]: E0202 11:20:22.550353 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b\": container with ID starting with 963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b not found: ID does not exist" containerID="963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.550404 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b"} err="failed to get container status \"963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b\": rpc error: code = NotFound desc = could not find container 
\"963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b\": container with ID starting with 963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b not found: ID does not exist" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.550437 4925 scope.go:117] "RemoveContainer" containerID="bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.550730 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2"} err="failed to get container status \"bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2\": rpc error: code = NotFound desc = could not find container \"bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2\": container with ID starting with bbc63ae404674070835207a7f0fd6c94677c4f93a5a90df635615e9364e19ed2 not found: ID does not exist" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.550752 4925 scope.go:117] "RemoveContainer" containerID="963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.551060 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b"} err="failed to get container status \"963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b\": rpc error: code = NotFound desc = could not find container \"963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b\": container with ID starting with 963f13c8a3e513a54273e8f40cca2f99a67b5fc8f881331aed1fab6e80c8f74b not found: ID does not exist" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.675688 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7728c259-3944-4f6b-baad-0e09278bba1b" path="/var/lib/kubelet/pods/7728c259-3944-4f6b-baad-0e09278bba1b/volumes" Feb 02 
11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.688114 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/314e8cb6-036e-4365-9056-026caca906f1-logs\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.688198 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-config-data-custom\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.688224 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-config-data\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.688243 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/314e8cb6-036e-4365-9056-026caca906f1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.688257 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.688406 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.688451 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-scripts\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.688550 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljlbj\" (UniqueName: \"kubernetes.io/projected/314e8cb6-036e-4365-9056-026caca906f1-kube-api-access-ljlbj\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.688662 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.790626 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/314e8cb6-036e-4365-9056-026caca906f1-logs\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.790723 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.790749 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-config-data\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.790764 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.790779 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/314e8cb6-036e-4365-9056-026caca906f1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.790805 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-scripts\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.790819 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.790858 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ljlbj\" (UniqueName: \"kubernetes.io/projected/314e8cb6-036e-4365-9056-026caca906f1-kube-api-access-ljlbj\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.790897 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.791277 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/314e8cb6-036e-4365-9056-026caca906f1-logs\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.791357 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/314e8cb6-036e-4365-9056-026caca906f1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.795802 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-config-data-custom\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.795900 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc 
kubenswrapper[4925]: I0202 11:20:22.799579 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.799709 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.799997 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-scripts\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.800390 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314e8cb6-036e-4365-9056-026caca906f1-config-data\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.810532 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljlbj\" (UniqueName: \"kubernetes.io/projected/314e8cb6-036e-4365-9056-026caca906f1-kube-api-access-ljlbj\") pod \"cinder-api-0\" (UID: \"314e8cb6-036e-4365-9056-026caca906f1\") " pod="openstack/cinder-api-0" Feb 02 11:20:22 crc kubenswrapper[4925]: I0202 11:20:22.839752 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 11:20:23 crc kubenswrapper[4925]: I0202 11:20:23.244517 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 11:20:23 crc kubenswrapper[4925]: I0202 11:20:23.323001 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 11:20:23 crc kubenswrapper[4925]: I0202 11:20:23.459447 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"314e8cb6-036e-4365-9056-026caca906f1","Type":"ContainerStarted","Data":"2e13aee2257f0099501261bc824578a21aa868ee8444de1a203286400c94b488"} Feb 02 11:20:24 crc kubenswrapper[4925]: I0202 11:20:24.470566 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"314e8cb6-036e-4365-9056-026caca906f1","Type":"ContainerStarted","Data":"d078f90c438e9f26f064b01956f14993e8eadf0273658f285ebe82ef868e4c5a"} Feb 02 11:20:24 crc kubenswrapper[4925]: I0202 11:20:24.471264 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 11:20:24 crc kubenswrapper[4925]: I0202 11:20:24.471284 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"314e8cb6-036e-4365-9056-026caca906f1","Type":"ContainerStarted","Data":"0e43f37cc51fd21a09063eb63c63a133ae08cfb2b4b90e1f018a2bad2f12c279"} Feb 02 11:20:24 crc kubenswrapper[4925]: I0202 11:20:24.499044 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.499026377 podStartE2EDuration="2.499026377s" podCreationTimestamp="2026-02-02 11:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:24.493976202 +0000 UTC m=+1401.498225164" watchObservedRunningTime="2026-02-02 11:20:24.499026377 +0000 UTC m=+1401.503275329" Feb 02 
11:20:26 crc kubenswrapper[4925]: W0202 11:20:26.047998 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53f3a16f_3d0e_409d_b652_c5179bb34e2a.slice/crio-fb108db7618fcaf3f8ad414e579e24851a6b3b7ffbca0b29c0a4b3a0bc4bbf93.scope WatchSource:0}: Error finding container fb108db7618fcaf3f8ad414e579e24851a6b3b7ffbca0b29c0a4b3a0bc4bbf93: Status 404 returned error can't find the container with id fb108db7618fcaf3f8ad414e579e24851a6b3b7ffbca0b29c0a4b3a0bc4bbf93 Feb 02 11:20:26 crc kubenswrapper[4925]: W0202 11:20:26.055455 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53f3a16f_3d0e_409d_b652_c5179bb34e2a.slice/crio-eaf04ffb01bf821ea4c76b7d96a541e359cf4b83af4e1f6ecbe9a0dd97a20a43.scope WatchSource:0}: Error finding container eaf04ffb01bf821ea4c76b7d96a541e359cf4b83af4e1f6ecbe9a0dd97a20a43: Status 404 returned error can't find the container with id eaf04ffb01bf821ea4c76b7d96a541e359cf4b83af4e1f6ecbe9a0dd97a20a43 Feb 02 11:20:26 crc kubenswrapper[4925]: W0202 11:20:26.064159 4925 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7728c259_3944_4f6b_baad_0e09278bba1b.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7728c259_3944_4f6b_baad_0e09278bba1b.slice: no such file or directory Feb 02 11:20:26 crc kubenswrapper[4925]: E0202 11:20:26.254319 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53f3a16f_3d0e_409d_b652_c5179bb34e2a.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53f3a16f_3d0e_409d_b652_c5179bb34e2a.slice/crio-conmon-eaf04ffb01bf821ea4c76b7d96a541e359cf4b83af4e1f6ecbe9a0dd97a20a43.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26690f1_3d10_4ef3_a16a_7c33dc1c62c0.slice/crio-conmon-c435f515ee52e76b55caaa8ade2d4d35413a90e0394d4fb2904420836f56e1a6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53f3a16f_3d0e_409d_b652_c5179bb34e2a.slice/crio-fb0a49347fe95fce3f3ee1c35ec846688dea12a319f19a42857e707e9b8dfbca\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c50bd8_7295_4568_b0ea_2f4374bee419.slice/crio-224c4788b2d9b5e0645df5b80f49f68e59b893316da1283ac5cef6912a84da96.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26690f1_3d10_4ef3_a16a_7c33dc1c62c0.slice/crio-c435f515ee52e76b55caaa8ade2d4d35413a90e0394d4fb2904420836f56e1a6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c50bd8_7295_4568_b0ea_2f4374bee419.slice/crio-conmon-224c4788b2d9b5e0645df5b80f49f68e59b893316da1283ac5cef6912a84da96.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26690f1_3d10_4ef3_a16a_7c33dc1c62c0.slice/crio-018be16b54d7ab34681f9d91c19d40a5645a9f82dbfc62ff97d016e6638e8e13\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26690f1_3d10_4ef3_a16a_7c33dc1c62c0.slice\": RecentStats: unable to find data in memory cache]" Feb 02 11:20:26 crc kubenswrapper[4925]: I0202 11:20:26.487984 4925 generic.go:334] "Generic (PLEG): container finished" 
podID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerID="224c4788b2d9b5e0645df5b80f49f68e59b893316da1283ac5cef6912a84da96" exitCode=137 Feb 02 11:20:26 crc kubenswrapper[4925]: I0202 11:20:26.488034 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerDied","Data":"224c4788b2d9b5e0645df5b80f49f68e59b893316da1283ac5cef6912a84da96"} Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.002971 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.166912 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-scripts\") pod \"05c50bd8-7295-4568-b0ea-2f4374bee419\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.166991 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-combined-ca-bundle\") pod \"05c50bd8-7295-4568-b0ea-2f4374bee419\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.167023 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-log-httpd\") pod \"05c50bd8-7295-4568-b0ea-2f4374bee419\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.167140 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-config-data\") pod \"05c50bd8-7295-4568-b0ea-2f4374bee419\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " Feb 02 
11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.167171 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm268\" (UniqueName: \"kubernetes.io/projected/05c50bd8-7295-4568-b0ea-2f4374bee419-kube-api-access-hm268\") pod \"05c50bd8-7295-4568-b0ea-2f4374bee419\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.167201 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-sg-core-conf-yaml\") pod \"05c50bd8-7295-4568-b0ea-2f4374bee419\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.167240 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-run-httpd\") pod \"05c50bd8-7295-4568-b0ea-2f4374bee419\" (UID: \"05c50bd8-7295-4568-b0ea-2f4374bee419\") " Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.167472 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "05c50bd8-7295-4568-b0ea-2f4374bee419" (UID: "05c50bd8-7295-4568-b0ea-2f4374bee419"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.167696 4925 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.167854 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "05c50bd8-7295-4568-b0ea-2f4374bee419" (UID: "05c50bd8-7295-4568-b0ea-2f4374bee419"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.173323 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-scripts" (OuterVolumeSpecName: "scripts") pod "05c50bd8-7295-4568-b0ea-2f4374bee419" (UID: "05c50bd8-7295-4568-b0ea-2f4374bee419"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.173477 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c50bd8-7295-4568-b0ea-2f4374bee419-kube-api-access-hm268" (OuterVolumeSpecName: "kube-api-access-hm268") pod "05c50bd8-7295-4568-b0ea-2f4374bee419" (UID: "05c50bd8-7295-4568-b0ea-2f4374bee419"). InnerVolumeSpecName "kube-api-access-hm268". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.194592 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "05c50bd8-7295-4568-b0ea-2f4374bee419" (UID: "05c50bd8-7295-4568-b0ea-2f4374bee419"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.237336 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05c50bd8-7295-4568-b0ea-2f4374bee419" (UID: "05c50bd8-7295-4568-b0ea-2f4374bee419"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.255695 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-config-data" (OuterVolumeSpecName: "config-data") pod "05c50bd8-7295-4568-b0ea-2f4374bee419" (UID: "05c50bd8-7295-4568-b0ea-2f4374bee419"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.269941 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.269990 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.270002 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.270011 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm268\" (UniqueName: 
\"kubernetes.io/projected/05c50bd8-7295-4568-b0ea-2f4374bee419-kube-api-access-hm268\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.270021 4925 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05c50bd8-7295-4568-b0ea-2f4374bee419-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.270029 4925 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05c50bd8-7295-4568-b0ea-2f4374bee419-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.500123 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05c50bd8-7295-4568-b0ea-2f4374bee419","Type":"ContainerDied","Data":"9fa0d30811fc38374079217f100c31e46b564e8f9c1bb62be6fc6e718ebc03d6"} Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.500179 4925 scope.go:117] "RemoveContainer" containerID="224c4788b2d9b5e0645df5b80f49f68e59b893316da1283ac5cef6912a84da96" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.500303 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.522291 4925 scope.go:117] "RemoveContainer" containerID="b01ef7e75072c24cb10d87756fe80563088dcd5ce7811511ef051ead32754f39" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.540278 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.549345 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.555230 4925 scope.go:117] "RemoveContainer" containerID="6d858a620d929eef30f8a9ae67b9ef0ed9d9fae765cf67180b008795f1b672d8" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.580236 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:20:27 crc kubenswrapper[4925]: E0202 11:20:27.580641 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="ceilometer-central-agent" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.580661 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="ceilometer-central-agent" Feb 02 11:20:27 crc kubenswrapper[4925]: E0202 11:20:27.580698 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="proxy-httpd" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.580706 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="proxy-httpd" Feb 02 11:20:27 crc kubenswrapper[4925]: E0202 11:20:27.580723 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="sg-core" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.580731 4925 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="sg-core" Feb 02 11:20:27 crc kubenswrapper[4925]: E0202 11:20:27.580739 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="ceilometer-notification-agent" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.580744 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="ceilometer-notification-agent" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.580911 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="sg-core" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.580923 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="ceilometer-central-agent" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.580937 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="ceilometer-notification-agent" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.580947 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" containerName="proxy-httpd" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.582828 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.587688 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.587980 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.588027 4925 scope.go:117] "RemoveContainer" containerID="92fe6b82890bf03ee09b76679160f2ac607e54c3046fe2d2d93e976698fb90c7" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.593020 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.679057 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.679154 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-log-httpd\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.679203 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-config-data\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.679308 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-scripts\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.679543 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-run-httpd\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.679616 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.679696 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llrnm\" (UniqueName: \"kubernetes.io/projected/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-kube-api-access-llrnm\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.782056 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-scripts\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.782756 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-run-httpd\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " 
pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.782785 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.782844 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llrnm\" (UniqueName: \"kubernetes.io/projected/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-kube-api-access-llrnm\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.782930 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.783726 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-log-httpd\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.783776 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-config-data\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.784356 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-log-httpd\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.784484 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-run-httpd\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.805834 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-scripts\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.809446 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.809702 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.810879 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-config-data\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.814903 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-llrnm\" (UniqueName: \"kubernetes.io/projected/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-kube-api-access-llrnm\") pod \"ceilometer-0\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " pod="openstack/ceilometer-0" Feb 02 11:20:27 crc kubenswrapper[4925]: I0202 11:20:27.915470 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.375606 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.482487 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.487412 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.510045 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerStarted","Data":"45a139414aee4f5ec2721b689b7e16c7cc41ebf955dd1f81ee10a03981fd4fc0"} Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.527770 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.528062 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerName="cinder-scheduler" containerID="cri-o://fca3a9bb1bf1d8b6f62123742bf0dab954bf7b2eb7128de88931364df32f5494" gracePeriod=30 Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.528142 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerName="probe" 
containerID="cri-o://7c95b2ef69ea98fc2ae585b4805c4057c66d8597cd6eb9b3c91c8bf64958ed82" gracePeriod=30 Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.578992 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-qhp2h"] Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.579562 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" podUID="6453ce80-6db5-49dd-a57a-2ba72b63fad6" containerName="dnsmasq-dns" containerID="cri-o://08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be" gracePeriod=10 Feb 02 11:20:28 crc kubenswrapper[4925]: I0202 11:20:28.678276 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c50bd8-7295-4568-b0ea-2f4374bee419" path="/var/lib/kubelet/pods/05c50bd8-7295-4568-b0ea-2f4374bee419/volumes" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.189199 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.308044 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-sb\") pod \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.308107 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-nb\") pod \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.308192 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-config\") pod \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.308286 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-dns-svc\") pod \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.308341 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvg9f\" (UniqueName: \"kubernetes.io/projected/6453ce80-6db5-49dd-a57a-2ba72b63fad6-kube-api-access-wvg9f\") pod \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\" (UID: \"6453ce80-6db5-49dd-a57a-2ba72b63fad6\") " Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.326526 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6453ce80-6db5-49dd-a57a-2ba72b63fad6-kube-api-access-wvg9f" (OuterVolumeSpecName: "kube-api-access-wvg9f") pod "6453ce80-6db5-49dd-a57a-2ba72b63fad6" (UID: "6453ce80-6db5-49dd-a57a-2ba72b63fad6"). InnerVolumeSpecName "kube-api-access-wvg9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.378886 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6453ce80-6db5-49dd-a57a-2ba72b63fad6" (UID: "6453ce80-6db5-49dd-a57a-2ba72b63fad6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.392414 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-config" (OuterVolumeSpecName: "config") pod "6453ce80-6db5-49dd-a57a-2ba72b63fad6" (UID: "6453ce80-6db5-49dd-a57a-2ba72b63fad6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.410031 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.410086 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvg9f\" (UniqueName: \"kubernetes.io/projected/6453ce80-6db5-49dd-a57a-2ba72b63fad6-kube-api-access-wvg9f\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.410102 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.412559 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6453ce80-6db5-49dd-a57a-2ba72b63fad6" (UID: "6453ce80-6db5-49dd-a57a-2ba72b63fad6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.418450 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6453ce80-6db5-49dd-a57a-2ba72b63fad6" (UID: "6453ce80-6db5-49dd-a57a-2ba72b63fad6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.512208 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.512566 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6453ce80-6db5-49dd-a57a-2ba72b63fad6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.535630 4925 generic.go:334] "Generic (PLEG): container finished" podID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerID="7c95b2ef69ea98fc2ae585b4805c4057c66d8597cd6eb9b3c91c8bf64958ed82" exitCode=0 Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.535755 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7c0eea36-0532-4d2f-9106-5e7283073cf5","Type":"ContainerDied","Data":"7c95b2ef69ea98fc2ae585b4805c4057c66d8597cd6eb9b3c91c8bf64958ed82"} Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.541948 4925 generic.go:334] "Generic (PLEG): container finished" podID="6453ce80-6db5-49dd-a57a-2ba72b63fad6" containerID="08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be" exitCode=0 Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.541998 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" 
event={"ID":"6453ce80-6db5-49dd-a57a-2ba72b63fad6","Type":"ContainerDied","Data":"08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be"} Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.542027 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" event={"ID":"6453ce80-6db5-49dd-a57a-2ba72b63fad6","Type":"ContainerDied","Data":"c3bfe8a095070dc534fab851bbfc29ba132c8953ac83886bd54af27189d25e70"} Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.542039 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-qhp2h" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.542046 4925 scope.go:117] "RemoveContainer" containerID="08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.577243 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-qhp2h"] Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.586258 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-qhp2h"] Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.712301 4925 scope.go:117] "RemoveContainer" containerID="cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.895476 4925 scope.go:117] "RemoveContainer" containerID="08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be" Feb 02 11:20:29 crc kubenswrapper[4925]: E0202 11:20:29.896070 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be\": container with ID starting with 08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be not found: ID does not exist" containerID="08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be" Feb 02 11:20:29 
crc kubenswrapper[4925]: I0202 11:20:29.896139 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be"} err="failed to get container status \"08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be\": rpc error: code = NotFound desc = could not find container \"08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be\": container with ID starting with 08fde4f4564f48b644d7d0acc9c134548960fcbc3d0efcacf6435520503196be not found: ID does not exist" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.896173 4925 scope.go:117] "RemoveContainer" containerID="cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012" Feb 02 11:20:29 crc kubenswrapper[4925]: E0202 11:20:29.896499 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012\": container with ID starting with cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012 not found: ID does not exist" containerID="cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012" Feb 02 11:20:29 crc kubenswrapper[4925]: I0202 11:20:29.896543 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012"} err="failed to get container status \"cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012\": rpc error: code = NotFound desc = could not find container \"cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012\": container with ID starting with cf176bd4f1da098a3ece3acbf01261b429a1b11bfc5d9f863d3ac1e487ff9012 not found: ID does not exist" Feb 02 11:20:30 crc kubenswrapper[4925]: I0202 11:20:30.550303 4925 generic.go:334] "Generic (PLEG): container finished" podID="49fa273c-1c74-4898-9a16-547d9397e0da" 
containerID="3c26de6cfefc8ac9af187925e530198add4d54e8708f781cc0adf3aa067fea01" exitCode=0 Feb 02 11:20:30 crc kubenswrapper[4925]: I0202 11:20:30.550390 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r7982" event={"ID":"49fa273c-1c74-4898-9a16-547d9397e0da","Type":"ContainerDied","Data":"3c26de6cfefc8ac9af187925e530198add4d54e8708f781cc0adf3aa067fea01"} Feb 02 11:20:30 crc kubenswrapper[4925]: I0202 11:20:30.553416 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerStarted","Data":"1c3cdf8e5f3bc2118fb0a1b18ab5476f371f50cf9a65c30038f9e9f8651a3b11"} Feb 02 11:20:30 crc kubenswrapper[4925]: I0202 11:20:30.675222 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6453ce80-6db5-49dd-a57a-2ba72b63fad6" path="/var/lib/kubelet/pods/6453ce80-6db5-49dd-a57a-2ba72b63fad6/volumes" Feb 02 11:20:31 crc kubenswrapper[4925]: I0202 11:20:31.564723 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerStarted","Data":"0b8364fbbe1d88a3a775e887e55b327abc2d0074b6151a25cdb20720676fd49f"} Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.004939 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.163130 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-config-data\") pod \"49fa273c-1c74-4898-9a16-547d9397e0da\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.163601 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-combined-ca-bundle\") pod \"49fa273c-1c74-4898-9a16-547d9397e0da\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.163810 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcwfc\" (UniqueName: \"kubernetes.io/projected/49fa273c-1c74-4898-9a16-547d9397e0da-kube-api-access-pcwfc\") pod \"49fa273c-1c74-4898-9a16-547d9397e0da\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.163933 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-scripts\") pod \"49fa273c-1c74-4898-9a16-547d9397e0da\" (UID: \"49fa273c-1c74-4898-9a16-547d9397e0da\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.168182 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-scripts" (OuterVolumeSpecName: "scripts") pod "49fa273c-1c74-4898-9a16-547d9397e0da" (UID: "49fa273c-1c74-4898-9a16-547d9397e0da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.168342 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49fa273c-1c74-4898-9a16-547d9397e0da-kube-api-access-pcwfc" (OuterVolumeSpecName: "kube-api-access-pcwfc") pod "49fa273c-1c74-4898-9a16-547d9397e0da" (UID: "49fa273c-1c74-4898-9a16-547d9397e0da"). InnerVolumeSpecName "kube-api-access-pcwfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.193233 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49fa273c-1c74-4898-9a16-547d9397e0da" (UID: "49fa273c-1c74-4898-9a16-547d9397e0da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.194975 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-config-data" (OuterVolumeSpecName: "config-data") pod "49fa273c-1c74-4898-9a16-547d9397e0da" (UID: "49fa273c-1c74-4898-9a16-547d9397e0da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.266378 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.266418 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.266431 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa273c-1c74-4898-9a16-547d9397e0da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.266447 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcwfc\" (UniqueName: \"kubernetes.io/projected/49fa273c-1c74-4898-9a16-547d9397e0da-kube-api-access-pcwfc\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.575368 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r7982" event={"ID":"49fa273c-1c74-4898-9a16-547d9397e0da","Type":"ContainerDied","Data":"55f931783821155c6806001307ca7408a7a45f7e16ab3394a788fe72e9de66e4"} Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.575680 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55f931783821155c6806001307ca7408a7a45f7e16ab3394a788fe72e9de66e4" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.575395 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r7982" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.577848 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7c0eea36-0532-4d2f-9106-5e7283073cf5","Type":"ContainerDied","Data":"fca3a9bb1bf1d8b6f62123742bf0dab954bf7b2eb7128de88931364df32f5494"} Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.577786 4925 generic.go:334] "Generic (PLEG): container finished" podID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerID="fca3a9bb1bf1d8b6f62123742bf0dab954bf7b2eb7128de88931364df32f5494" exitCode=0 Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.580766 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerStarted","Data":"bb3d9af736b4b8552aa3d4d031897eaf61c2868ab752a76154c1413e259ef78e"} Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.588673 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.674451 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data-custom\") pod \"7c0eea36-0532-4d2f-9106-5e7283073cf5\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.674496 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h9cl\" (UniqueName: \"kubernetes.io/projected/7c0eea36-0532-4d2f-9106-5e7283073cf5-kube-api-access-8h9cl\") pod \"7c0eea36-0532-4d2f-9106-5e7283073cf5\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.674515 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-combined-ca-bundle\") pod \"7c0eea36-0532-4d2f-9106-5e7283073cf5\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.674589 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c0eea36-0532-4d2f-9106-5e7283073cf5-etc-machine-id\") pod \"7c0eea36-0532-4d2f-9106-5e7283073cf5\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.674618 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data\") pod \"7c0eea36-0532-4d2f-9106-5e7283073cf5\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.674687 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-scripts\") pod \"7c0eea36-0532-4d2f-9106-5e7283073cf5\" (UID: \"7c0eea36-0532-4d2f-9106-5e7283073cf5\") " Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.676304 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0eea36-0532-4d2f-9106-5e7283073cf5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7c0eea36-0532-4d2f-9106-5e7283073cf5" (UID: "7c0eea36-0532-4d2f-9106-5e7283073cf5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.686234 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-scripts" (OuterVolumeSpecName: "scripts") pod "7c0eea36-0532-4d2f-9106-5e7283073cf5" (UID: "7c0eea36-0532-4d2f-9106-5e7283073cf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.686281 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7c0eea36-0532-4d2f-9106-5e7283073cf5" (UID: "7c0eea36-0532-4d2f-9106-5e7283073cf5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.692521 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0eea36-0532-4d2f-9106-5e7283073cf5-kube-api-access-8h9cl" (OuterVolumeSpecName: "kube-api-access-8h9cl") pod "7c0eea36-0532-4d2f-9106-5e7283073cf5" (UID: "7c0eea36-0532-4d2f-9106-5e7283073cf5"). InnerVolumeSpecName "kube-api-access-8h9cl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.776609 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.778838 4925 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.778851 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h9cl\" (UniqueName: \"kubernetes.io/projected/7c0eea36-0532-4d2f-9106-5e7283073cf5-kube-api-access-8h9cl\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.778860 4925 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c0eea36-0532-4d2f-9106-5e7283073cf5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.792535 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c0eea36-0532-4d2f-9106-5e7283073cf5" (UID: "7c0eea36-0532-4d2f-9106-5e7283073cf5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.846997 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 11:20:32 crc kubenswrapper[4925]: E0202 11:20:32.847424 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fa273c-1c74-4898-9a16-547d9397e0da" containerName="nova-cell0-conductor-db-sync" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.847489 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fa273c-1c74-4898-9a16-547d9397e0da" containerName="nova-cell0-conductor-db-sync" Feb 02 11:20:32 crc kubenswrapper[4925]: E0202 11:20:32.847508 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerName="probe" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.847516 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerName="probe" Feb 02 11:20:32 crc kubenswrapper[4925]: E0202 11:20:32.847620 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6453ce80-6db5-49dd-a57a-2ba72b63fad6" containerName="dnsmasq-dns" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.847637 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6453ce80-6db5-49dd-a57a-2ba72b63fad6" containerName="dnsmasq-dns" Feb 02 11:20:32 crc kubenswrapper[4925]: E0202 11:20:32.847647 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerName="cinder-scheduler" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.847657 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerName="cinder-scheduler" Feb 02 11:20:32 crc kubenswrapper[4925]: E0202 11:20:32.847670 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6453ce80-6db5-49dd-a57a-2ba72b63fad6" containerName="init" Feb 02 
11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.847678 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6453ce80-6db5-49dd-a57a-2ba72b63fad6" containerName="init" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.847878 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6453ce80-6db5-49dd-a57a-2ba72b63fad6" containerName="dnsmasq-dns" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.847901 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fa273c-1c74-4898-9a16-547d9397e0da" containerName="nova-cell0-conductor-db-sync" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.847918 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerName="cinder-scheduler" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.847933 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0eea36-0532-4d2f-9106-5e7283073cf5" containerName="probe" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.848492 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.848576 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.851684 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.851845 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9w4vh" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.881137 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.881990 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data" (OuterVolumeSpecName: "config-data") pod "7c0eea36-0532-4d2f-9106-5e7283073cf5" (UID: "7c0eea36-0532-4d2f-9106-5e7283073cf5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.984567 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85lt\" (UniqueName: \"kubernetes.io/projected/8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c-kube-api-access-b85lt\") pod \"nova-cell0-conductor-0\" (UID: \"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c\") " pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.984739 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c\") " pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.984839 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c\") " pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:32 crc kubenswrapper[4925]: I0202 11:20:32.986920 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0eea36-0532-4d2f-9106-5e7283073cf5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.089386 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b85lt\" (UniqueName: \"kubernetes.io/projected/8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c-kube-api-access-b85lt\") pod \"nova-cell0-conductor-0\" (UID: \"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c\") " pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.089625 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c\") " pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.089677 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c\") " pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.094725 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c\") " pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.095749 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c\") " pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.115968 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b85lt\" (UniqueName: \"kubernetes.io/projected/8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c-kube-api-access-b85lt\") pod \"nova-cell0-conductor-0\" (UID: \"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c\") " pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.171791 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.587623 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.592585 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7c0eea36-0532-4d2f-9106-5e7283073cf5","Type":"ContainerDied","Data":"921a286ccd5b045d4eb61af5c5ca783af4dc7204574c55d9768a369b69f83768"} Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.592652 4925 scope.go:117] "RemoveContainer" containerID="7c95b2ef69ea98fc2ae585b4805c4057c66d8597cd6eb9b3c91c8bf64958ed82" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.592820 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.621783 4925 scope.go:117] "RemoveContainer" containerID="fca3a9bb1bf1d8b6f62123742bf0dab954bf7b2eb7128de88931364df32f5494" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.645396 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.652783 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.675632 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.678302 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.681099 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.716650 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.719426 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.719457 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.720269 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.720408 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xtx6\" (UniqueName: \"kubernetes.io/projected/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-kube-api-access-6xtx6\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc 
kubenswrapper[4925]: I0202 11:20:33.720508 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.720576 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.821261 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.821308 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.821336 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.821412 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6xtx6\" (UniqueName: \"kubernetes.io/projected/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-kube-api-access-6xtx6\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.821464 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.821492 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.821846 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.828375 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.828753 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " 
pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.842785 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.847802 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xtx6\" (UniqueName: \"kubernetes.io/projected/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-kube-api-access-6xtx6\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:33 crc kubenswrapper[4925]: I0202 11:20:33.863854 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a0d1352-9215-4b6e-831a-d9d654cc8a1e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a0d1352-9215-4b6e-831a-d9d654cc8a1e\") " pod="openstack/cinder-scheduler-0" Feb 02 11:20:34 crc kubenswrapper[4925]: I0202 11:20:34.092153 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:20:34 crc kubenswrapper[4925]: I0202 11:20:34.115915 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 11:20:34 crc kubenswrapper[4925]: I0202 11:20:34.628701 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c","Type":"ContainerStarted","Data":"c144664ea8fce1af6f76f3e661d4e922bfaeee84f815cdc62f717a78ce3f7cf6"} Feb 02 11:20:34 crc kubenswrapper[4925]: I0202 11:20:34.629198 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:34 crc kubenswrapper[4925]: I0202 11:20:34.629213 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c","Type":"ContainerStarted","Data":"97f626ba82c0a6498b71f69662deff58ee3a6089806d8ad8b706738662f3b4e8"} Feb 02 11:20:34 crc kubenswrapper[4925]: I0202 11:20:34.653991 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.653972634 podStartE2EDuration="2.653972634s" podCreationTimestamp="2026-02-02 11:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:34.646638598 +0000 UTC m=+1411.650887560" watchObservedRunningTime="2026-02-02 11:20:34.653972634 +0000 UTC m=+1411.658221596" Feb 02 11:20:34 crc kubenswrapper[4925]: I0202 11:20:34.789295 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0eea36-0532-4d2f-9106-5e7283073cf5" path="/var/lib/kubelet/pods/7c0eea36-0532-4d2f-9106-5e7283073cf5/volumes" Feb 02 11:20:35 crc kubenswrapper[4925]: I0202 11:20:35.048266 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 11:20:35 crc kubenswrapper[4925]: I0202 11:20:35.098951 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 11:20:35 
crc kubenswrapper[4925]: I0202 11:20:35.642530 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerStarted","Data":"0ef01eebd351c44a7555b45804b19d15327777a1bca419140dde38773e84010f"} Feb 02 11:20:35 crc kubenswrapper[4925]: I0202 11:20:35.643599 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="ceilometer-central-agent" containerID="cri-o://1c3cdf8e5f3bc2118fb0a1b18ab5476f371f50cf9a65c30038f9e9f8651a3b11" gracePeriod=30 Feb 02 11:20:35 crc kubenswrapper[4925]: I0202 11:20:35.643776 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="sg-core" containerID="cri-o://bb3d9af736b4b8552aa3d4d031897eaf61c2868ab752a76154c1413e259ef78e" gracePeriod=30 Feb 02 11:20:35 crc kubenswrapper[4925]: I0202 11:20:35.643810 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="proxy-httpd" containerID="cri-o://0ef01eebd351c44a7555b45804b19d15327777a1bca419140dde38773e84010f" gracePeriod=30 Feb 02 11:20:35 crc kubenswrapper[4925]: I0202 11:20:35.643913 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="ceilometer-notification-agent" containerID="cri-o://0b8364fbbe1d88a3a775e887e55b327abc2d0074b6151a25cdb20720676fd49f" gracePeriod=30 Feb 02 11:20:35 crc kubenswrapper[4925]: I0202 11:20:35.644850 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:20:35 crc kubenswrapper[4925]: I0202 11:20:35.649342 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2a0d1352-9215-4b6e-831a-d9d654cc8a1e","Type":"ContainerStarted","Data":"1d0124c1734d81a77248bc9460a381a675895b0645f998eacab2bfd6ad3a77d5"} Feb 02 11:20:35 crc kubenswrapper[4925]: I0202 11:20:35.689256 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.507018015 podStartE2EDuration="8.689066513s" podCreationTimestamp="2026-02-02 11:20:27 +0000 UTC" firstStartedPulling="2026-02-02 11:20:28.386440118 +0000 UTC m=+1405.390689080" lastFinishedPulling="2026-02-02 11:20:34.568488616 +0000 UTC m=+1411.572737578" observedRunningTime="2026-02-02 11:20:35.68186269 +0000 UTC m=+1412.686111652" watchObservedRunningTime="2026-02-02 11:20:35.689066513 +0000 UTC m=+1412.693315485" Feb 02 11:20:36 crc kubenswrapper[4925]: I0202 11:20:36.674297 4925 generic.go:334] "Generic (PLEG): container finished" podID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerID="bb3d9af736b4b8552aa3d4d031897eaf61c2868ab752a76154c1413e259ef78e" exitCode=2 Feb 02 11:20:36 crc kubenswrapper[4925]: I0202 11:20:36.674861 4925 generic.go:334] "Generic (PLEG): container finished" podID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerID="0b8364fbbe1d88a3a775e887e55b327abc2d0074b6151a25cdb20720676fd49f" exitCode=0 Feb 02 11:20:36 crc kubenswrapper[4925]: I0202 11:20:36.674877 4925 generic.go:334] "Generic (PLEG): container finished" podID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerID="1c3cdf8e5f3bc2118fb0a1b18ab5476f371f50cf9a65c30038f9e9f8651a3b11" exitCode=0 Feb 02 11:20:36 crc kubenswrapper[4925]: I0202 11:20:36.683170 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerDied","Data":"bb3d9af736b4b8552aa3d4d031897eaf61c2868ab752a76154c1413e259ef78e"} Feb 02 11:20:36 crc kubenswrapper[4925]: I0202 11:20:36.683223 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerDied","Data":"0b8364fbbe1d88a3a775e887e55b327abc2d0074b6151a25cdb20720676fd49f"} Feb 02 11:20:36 crc kubenswrapper[4925]: I0202 11:20:36.683239 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerDied","Data":"1c3cdf8e5f3bc2118fb0a1b18ab5476f371f50cf9a65c30038f9e9f8651a3b11"} Feb 02 11:20:36 crc kubenswrapper[4925]: I0202 11:20:36.683251 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a0d1352-9215-4b6e-831a-d9d654cc8a1e","Type":"ContainerStarted","Data":"4ce9ea0e987e43a62cc7fb22350080eb626bc00a243fa0870f47bc74df172209"} Feb 02 11:20:36 crc kubenswrapper[4925]: I0202 11:20:36.683277 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a0d1352-9215-4b6e-831a-d9d654cc8a1e","Type":"ContainerStarted","Data":"a8bc3005440c7568b264ee626935376562410e1890f2eaf19a2cb8b4ee99c770"} Feb 02 11:20:36 crc kubenswrapper[4925]: I0202 11:20:36.712186 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.712163992 podStartE2EDuration="3.712163992s" podCreationTimestamp="2026-02-02 11:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:36.705854194 +0000 UTC m=+1413.710103166" watchObservedRunningTime="2026-02-02 11:20:36.712163992 +0000 UTC m=+1413.716412984" Feb 02 11:20:38 crc kubenswrapper[4925]: I0202 11:20:38.199190 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.083690 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rj59c"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 
11:20:39.085037 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.087590 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.087732 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.092681 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rj59c"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.117020 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.220399 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2crx\" (UniqueName: \"kubernetes.io/projected/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-kube-api-access-q2crx\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.220479 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-config-data\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.220561 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: 
\"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.220599 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-scripts\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.258620 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.260335 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.278944 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.325299 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.325357 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-scripts\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.325436 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2crx\" (UniqueName: \"kubernetes.io/projected/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-kube-api-access-q2crx\") 
pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.325467 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-config-data\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.331253 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.339534 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-scripts\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.344550 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-config-data\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.386408 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.391438 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2crx\" (UniqueName: 
\"kubernetes.io/projected/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-kube-api-access-q2crx\") pod \"nova-cell0-cell-mapping-rj59c\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.403569 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.428131 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kww42\" (UniqueName: \"kubernetes.io/projected/ab8d7782-81fa-4d33-b995-234a277b2056-kube-api-access-kww42\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.428243 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.428268 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d7782-81fa-4d33-b995-234a277b2056-logs\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.428306 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-config-data\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.438142 4925 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.439273 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.444548 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.455554 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.457327 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.473488 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.476959 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.502381 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.533218 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kww42\" (UniqueName: \"kubernetes.io/projected/ab8d7782-81fa-4d33-b995-234a277b2056-kube-api-access-kww42\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.533298 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.533330 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhs4n\" (UniqueName: \"kubernetes.io/projected/107183ad-93d2-41f6-ae04-35f0d583befa-kube-api-access-nhs4n\") pod \"nova-scheduler-0\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.533414 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.533439 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d7782-81fa-4d33-b995-234a277b2056-logs\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.533472 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-config-data\") pod \"nova-scheduler-0\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.533514 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-config-data\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.537519 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.537778 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d7782-81fa-4d33-b995-234a277b2056-logs\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.538870 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.544227 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.553242 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-config-data\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.557607 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.559947 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.563844 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kww42\" (UniqueName: \"kubernetes.io/projected/ab8d7782-81fa-4d33-b995-234a277b2056-kube-api-access-kww42\") pod \"nova-api-0\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.606964 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-z5nww"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 
11:20:39.608670 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.631282 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-z5nww"] Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.640532 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.640598 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.640635 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5jt9\" (UniqueName: \"kubernetes.io/projected/d5a25867-5d37-47f7-b2e9-edcb943f6480-kube-api-access-n5jt9\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.640655 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-config-data\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.640691 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.640737 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhs4n\" (UniqueName: \"kubernetes.io/projected/107183ad-93d2-41f6-ae04-35f0d583befa-kube-api-access-nhs4n\") pod \"nova-scheduler-0\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.640782 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a25867-5d37-47f7-b2e9-edcb943f6480-logs\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.640835 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-config-data\") pod \"nova-scheduler-0\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.641120 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.641149 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9vfn\" (UniqueName: 
\"kubernetes.io/projected/38fb740d-7a25-4acc-b004-648500772071-kube-api-access-n9vfn\") pod \"nova-cell1-novncproxy-0\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.644831 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.658356 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-config-data\") pod \"nova-scheduler-0\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.665594 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhs4n\" (UniqueName: \"kubernetes.io/projected/107183ad-93d2-41f6-ae04-35f0d583befa-kube-api-access-nhs4n\") pod \"nova-scheduler-0\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.743640 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf24h\" (UniqueName: \"kubernetes.io/projected/674b5bd9-5722-4535-9f0e-931c61ed14d9-kube-api-access-bf24h\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.743697 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.743745 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.743872 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.743968 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-config-data\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.743990 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5jt9\" (UniqueName: \"kubernetes.io/projected/d5a25867-5d37-47f7-b2e9-edcb943f6480-kube-api-access-n5jt9\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.744048 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " 
pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.744137 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-config\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.744225 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a25867-5d37-47f7-b2e9-edcb943f6480-logs\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.744357 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.744399 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9vfn\" (UniqueName: \"kubernetes.io/projected/38fb740d-7a25-4acc-b004-648500772071-kube-api-access-n9vfn\") pod \"nova-cell1-novncproxy-0\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.744436 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-dns-svc\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.745197 
4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a25867-5d37-47f7-b2e9-edcb943f6480-logs\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.747826 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.750535 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.753650 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.753810 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-config-data\") pod \"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.764949 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5jt9\" (UniqueName: \"kubernetes.io/projected/d5a25867-5d37-47f7-b2e9-edcb943f6480-kube-api-access-n5jt9\") pod 
\"nova-metadata-0\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.765687 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9vfn\" (UniqueName: \"kubernetes.io/projected/38fb740d-7a25-4acc-b004-648500772071-kube-api-access-n9vfn\") pod \"nova-cell1-novncproxy-0\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.846421 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-dns-svc\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.846469 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf24h\" (UniqueName: \"kubernetes.io/projected/674b5bd9-5722-4535-9f0e-931c61ed14d9-kube-api-access-bf24h\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.846527 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.846708 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " 
pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.846789 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-config\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.847332 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.848117 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-config\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.848408 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.848527 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-dns-svc\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.854262 4925 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.867866 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf24h\" (UniqueName: \"kubernetes.io/projected/674b5bd9-5722-4535-9f0e-931c61ed14d9-kube-api-access-bf24h\") pod \"dnsmasq-dns-566b5b7845-z5nww\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.899185 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.932251 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.968587 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:39 crc kubenswrapper[4925]: I0202 11:20:39.995858 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.018871 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rj59c"] Feb 02 11:20:40 crc kubenswrapper[4925]: W0202 11:20:40.039649 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23811ff6_75a2_4ea6_9ebb_bbca86b5cb38.slice/crio-6b463613462584bde74029326354b15f4d82910ea8ad6fcd367bc9e2502241a8 WatchSource:0}: Error finding container 6b463613462584bde74029326354b15f4d82910ea8ad6fcd367bc9e2502241a8: Status 404 returned error can't find the container with id 6b463613462584bde74029326354b15f4d82910ea8ad6fcd367bc9e2502241a8 Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.347176 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.375404 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w48pz"] Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.376559 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.378583 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.381412 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.393600 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w48pz"] Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.458925 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-config-data\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.458999 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czh7h\" (UniqueName: \"kubernetes.io/projected/c5e37af9-8c2c-4349-8496-4af3ce643c26-kube-api-access-czh7h\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.459020 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-scripts\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.459192 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.555277 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.561019 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.561127 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-config-data\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.561179 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czh7h\" (UniqueName: \"kubernetes.io/projected/c5e37af9-8c2c-4349-8496-4af3ce643c26-kube-api-access-czh7h\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.561212 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-scripts\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " 
pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.567196 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-config-data\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.569654 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.573488 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-scripts\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.582097 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.593249 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czh7h\" (UniqueName: \"kubernetes.io/projected/c5e37af9-8c2c-4349-8496-4af3ce643c26-kube-api-access-czh7h\") pod \"nova-cell1-conductor-db-sync-w48pz\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.777556 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.782407 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rj59c" event={"ID":"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38","Type":"ContainerStarted","Data":"d1741f20250c77558b3d93b3bbc5ab72807b100e5b7c7a0c2cafeac50c50af7e"} Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.782465 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rj59c" event={"ID":"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38","Type":"ContainerStarted","Data":"6b463613462584bde74029326354b15f4d82910ea8ad6fcd367bc9e2502241a8"} Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.796310 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab8d7782-81fa-4d33-b995-234a277b2056","Type":"ContainerStarted","Data":"eb5349f3e16a7f067350f48a6d33062ec91e804372f7e8892bb22c625005d37d"} Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.800733 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-z5nww"] Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.819319 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5a25867-5d37-47f7-b2e9-edcb943f6480","Type":"ContainerStarted","Data":"5f6b502f763a70e7b1c508c1f70dfdeebb21322611001cd7b2cf8535a24330c8"} Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.831153 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.834571 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rj59c" podStartSLOduration=1.834554681 podStartE2EDuration="1.834554681s" podCreationTimestamp="2026-02-02 11:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:40.833400061 +0000 UTC m=+1417.837649043" watchObservedRunningTime="2026-02-02 11:20:40.834554681 +0000 UTC m=+1417.838803643" Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.843160 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"107183ad-93d2-41f6-ae04-35f0d583befa","Type":"ContainerStarted","Data":"19f91cc4d9f94b784aeb7af05d7caba7defdd3a0e2c86336d9ca9ef2af5b8bca"} Feb 02 11:20:40 crc kubenswrapper[4925]: I0202 11:20:40.933129 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:41 crc kubenswrapper[4925]: I0202 11:20:41.393379 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w48pz"] Feb 02 11:20:41 crc kubenswrapper[4925]: I0202 11:20:41.861559 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"38fb740d-7a25-4acc-b004-648500772071","Type":"ContainerStarted","Data":"37db8534654cf7b4d97ff919fff29fc9edc189692a6ce088ca7a9b057492721d"} Feb 02 11:20:41 crc kubenswrapper[4925]: I0202 11:20:41.863873 4925 generic.go:334] "Generic (PLEG): container finished" podID="674b5bd9-5722-4535-9f0e-931c61ed14d9" containerID="d8af2619e76a4bd4bd56f0ce28b15dfaf14b64cde2602776f69e14616e15f768" exitCode=0 Feb 02 11:20:41 crc kubenswrapper[4925]: I0202 11:20:41.863937 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" event={"ID":"674b5bd9-5722-4535-9f0e-931c61ed14d9","Type":"ContainerDied","Data":"d8af2619e76a4bd4bd56f0ce28b15dfaf14b64cde2602776f69e14616e15f768"} Feb 02 11:20:41 crc kubenswrapper[4925]: I0202 11:20:41.863971 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" 
event={"ID":"674b5bd9-5722-4535-9f0e-931c61ed14d9","Type":"ContainerStarted","Data":"f9ff24ccc6ce263b941c1e49b17bd453c8ff42dcc33028e1e171e93a7536f75c"} Feb 02 11:20:41 crc kubenswrapper[4925]: I0202 11:20:41.868563 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w48pz" event={"ID":"c5e37af9-8c2c-4349-8496-4af3ce643c26","Type":"ContainerStarted","Data":"26722a9ff8843dbf64103f97b7e1c266525c9e970107164e240fb31472bc3990"} Feb 02 11:20:41 crc kubenswrapper[4925]: I0202 11:20:41.868600 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w48pz" event={"ID":"c5e37af9-8c2c-4349-8496-4af3ce643c26","Type":"ContainerStarted","Data":"4170f01440cb398b60e472e49bdaa6a3741685b1e4b40acd078f2186aac8981c"} Feb 02 11:20:41 crc kubenswrapper[4925]: I0202 11:20:41.903262 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-w48pz" podStartSLOduration=1.903238771 podStartE2EDuration="1.903238771s" podCreationTimestamp="2026-02-02 11:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:41.899996964 +0000 UTC m=+1418.904245926" watchObservedRunningTime="2026-02-02 11:20:41.903238771 +0000 UTC m=+1418.907487743" Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.287345 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.337753 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.393482 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7755c4bbbc-qkg7f" Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.398436 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.398490 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.398549 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.399152 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66621d3a93bf4a19f7e5b6564542e797798b65c2f056111ba9523d20399b11ef"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.399203 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://66621d3a93bf4a19f7e5b6564542e797798b65c2f056111ba9523d20399b11ef" gracePeriod=600 Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.460480 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5958464848-b4clm"] Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.460798 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5958464848-b4clm" 
podUID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerName="neutron-api" containerID="cri-o://68a1a633280acf1c2d2483df656adb891bc37daa1a3ca94a10cd19000b73879f" gracePeriod=30 Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.461257 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5958464848-b4clm" podUID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerName="neutron-httpd" containerID="cri-o://eb1ca3101da3434c92e4496c45ed59803da2865d7f4bddf4d14ff50c0e7f38e1" gracePeriod=30 Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.888281 4925 generic.go:334] "Generic (PLEG): container finished" podID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerID="eb1ca3101da3434c92e4496c45ed59803da2865d7f4bddf4d14ff50c0e7f38e1" exitCode=0 Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.888335 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5958464848-b4clm" event={"ID":"654b5fff-2101-4f2a-9cc1-1a001d28f425","Type":"ContainerDied","Data":"eb1ca3101da3434c92e4496c45ed59803da2865d7f4bddf4d14ff50c0e7f38e1"} Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.892273 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="66621d3a93bf4a19f7e5b6564542e797798b65c2f056111ba9523d20399b11ef" exitCode=0 Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.892357 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"66621d3a93bf4a19f7e5b6564542e797798b65c2f056111ba9523d20399b11ef"} Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.892436 4925 scope.go:117] "RemoveContainer" containerID="dc20c1950a2aee33db5a561db4bbc78e34cfd4881473af054b6cd76fb628d232" Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.895240 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-566b5b7845-z5nww" event={"ID":"674b5bd9-5722-4535-9f0e-931c61ed14d9","Type":"ContainerStarted","Data":"db31309208a72b81a6de9c310adef9ed5e2b4f57b9eb2fc9394c18c0b9c69c7b"} Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.895395 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:43 crc kubenswrapper[4925]: I0202 11:20:43.923809 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" podStartSLOduration=4.923786972 podStartE2EDuration="4.923786972s" podCreationTimestamp="2026-02-02 11:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:43.916523658 +0000 UTC m=+1420.920772630" watchObservedRunningTime="2026-02-02 11:20:43.923786972 +0000 UTC m=+1420.928035944" Feb 02 11:20:44 crc kubenswrapper[4925]: I0202 11:20:44.372307 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 11:20:46 crc kubenswrapper[4925]: I0202 11:20:46.949512 4925 generic.go:334] "Generic (PLEG): container finished" podID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerID="68a1a633280acf1c2d2483df656adb891bc37daa1a3ca94a10cd19000b73879f" exitCode=0 Feb 02 11:20:46 crc kubenswrapper[4925]: I0202 11:20:46.949597 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5958464848-b4clm" event={"ID":"654b5fff-2101-4f2a-9cc1-1a001d28f425","Type":"ContainerDied","Data":"68a1a633280acf1c2d2483df656adb891bc37daa1a3ca94a10cd19000b73879f"} Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.577999 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.751124 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnx2v\" (UniqueName: \"kubernetes.io/projected/654b5fff-2101-4f2a-9cc1-1a001d28f425-kube-api-access-xnx2v\") pod \"654b5fff-2101-4f2a-9cc1-1a001d28f425\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.751577 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-combined-ca-bundle\") pod \"654b5fff-2101-4f2a-9cc1-1a001d28f425\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.751756 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-config\") pod \"654b5fff-2101-4f2a-9cc1-1a001d28f425\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.751835 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-httpd-config\") pod \"654b5fff-2101-4f2a-9cc1-1a001d28f425\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.751868 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-ovndb-tls-certs\") pod \"654b5fff-2101-4f2a-9cc1-1a001d28f425\" (UID: \"654b5fff-2101-4f2a-9cc1-1a001d28f425\") " Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.762687 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/654b5fff-2101-4f2a-9cc1-1a001d28f425-kube-api-access-xnx2v" (OuterVolumeSpecName: "kube-api-access-xnx2v") pod "654b5fff-2101-4f2a-9cc1-1a001d28f425" (UID: "654b5fff-2101-4f2a-9cc1-1a001d28f425"). InnerVolumeSpecName "kube-api-access-xnx2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.777678 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "654b5fff-2101-4f2a-9cc1-1a001d28f425" (UID: "654b5fff-2101-4f2a-9cc1-1a001d28f425"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.838245 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "654b5fff-2101-4f2a-9cc1-1a001d28f425" (UID: "654b5fff-2101-4f2a-9cc1-1a001d28f425"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.839148 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-config" (OuterVolumeSpecName: "config") pod "654b5fff-2101-4f2a-9cc1-1a001d28f425" (UID: "654b5fff-2101-4f2a-9cc1-1a001d28f425"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.854734 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.854770 4925 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.854783 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnx2v\" (UniqueName: \"kubernetes.io/projected/654b5fff-2101-4f2a-9cc1-1a001d28f425-kube-api-access-xnx2v\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.854796 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.875500 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "654b5fff-2101-4f2a-9cc1-1a001d28f425" (UID: "654b5fff-2101-4f2a-9cc1-1a001d28f425"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.956388 4925 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/654b5fff-2101-4f2a-9cc1-1a001d28f425-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.961590 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"107183ad-93d2-41f6-ae04-35f0d583befa","Type":"ContainerStarted","Data":"9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1"} Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.966282 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5958464848-b4clm" event={"ID":"654b5fff-2101-4f2a-9cc1-1a001d28f425","Type":"ContainerDied","Data":"b4a03616d5db2fe43ff929d2b9715f22f4e1fb9edea78a3c50ee5a93b51a064f"} Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.966333 4925 scope.go:117] "RemoveContainer" containerID="eb1ca3101da3434c92e4496c45ed59803da2865d7f4bddf4d14ff50c0e7f38e1" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.966456 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5958464848-b4clm" Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.970897 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab8d7782-81fa-4d33-b995-234a277b2056","Type":"ContainerStarted","Data":"86bcac6771065eee1b46f2e3684560533411bed503fe339d14cf30a9ed56a4fa"} Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.974473 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5a25867-5d37-47f7-b2e9-edcb943f6480","Type":"ContainerStarted","Data":"447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76"} Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.978970 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff"} Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.988045 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"38fb740d-7a25-4acc-b004-648500772071","Type":"ContainerStarted","Data":"ab9ccb90370780273fe6cc2939a57209b6acbe110f767f98f8fa60c7ba7f6312"} Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.988198 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="38fb740d-7a25-4acc-b004-648500772071" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://ab9ccb90370780273fe6cc2939a57209b6acbe110f767f98f8fa60c7ba7f6312" gracePeriod=30 Feb 02 11:20:47 crc kubenswrapper[4925]: I0202 11:20:47.996743 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.336910385 podStartE2EDuration="8.996721498s" podCreationTimestamp="2026-02-02 11:20:39 +0000 UTC" 
firstStartedPulling="2026-02-02 11:20:40.540358658 +0000 UTC m=+1417.544607620" lastFinishedPulling="2026-02-02 11:20:47.200169771 +0000 UTC m=+1424.204418733" observedRunningTime="2026-02-02 11:20:47.987292296 +0000 UTC m=+1424.991541278" watchObservedRunningTime="2026-02-02 11:20:47.996721498 +0000 UTC m=+1425.000970460" Feb 02 11:20:48 crc kubenswrapper[4925]: I0202 11:20:48.028143 4925 scope.go:117] "RemoveContainer" containerID="68a1a633280acf1c2d2483df656adb891bc37daa1a3ca94a10cd19000b73879f" Feb 02 11:20:48 crc kubenswrapper[4925]: I0202 11:20:48.045940 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5958464848-b4clm"] Feb 02 11:20:48 crc kubenswrapper[4925]: I0202 11:20:48.063516 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5958464848-b4clm"] Feb 02 11:20:48 crc kubenswrapper[4925]: I0202 11:20:48.068146 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.728070933 podStartE2EDuration="9.068100818s" podCreationTimestamp="2026-02-02 11:20:39 +0000 UTC" firstStartedPulling="2026-02-02 11:20:40.861551184 +0000 UTC m=+1417.865800146" lastFinishedPulling="2026-02-02 11:20:47.201581079 +0000 UTC m=+1424.205830031" observedRunningTime="2026-02-02 11:20:48.055332486 +0000 UTC m=+1425.059581458" watchObservedRunningTime="2026-02-02 11:20:48.068100818 +0000 UTC m=+1425.072349790" Feb 02 11:20:48 crc kubenswrapper[4925]: I0202 11:20:48.678595 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654b5fff-2101-4f2a-9cc1-1a001d28f425" path="/var/lib/kubelet/pods/654b5fff-2101-4f2a-9cc1-1a001d28f425/volumes" Feb 02 11:20:48 crc kubenswrapper[4925]: I0202 11:20:48.998761 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5a25867-5d37-47f7-b2e9-edcb943f6480","Type":"ContainerStarted","Data":"375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29"} 
Feb 02 11:20:48 crc kubenswrapper[4925]: I0202 11:20:48.998866 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerName="nova-metadata-log" containerID="cri-o://447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76" gracePeriod=30 Feb 02 11:20:48 crc kubenswrapper[4925]: I0202 11:20:48.998902 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerName="nova-metadata-metadata" containerID="cri-o://375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29" gracePeriod=30 Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.002980 4925 generic.go:334] "Generic (PLEG): container finished" podID="23811ff6-75a2-4ea6-9ebb-bbca86b5cb38" containerID="d1741f20250c77558b3d93b3bbc5ab72807b100e5b7c7a0c2cafeac50c50af7e" exitCode=0 Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.003085 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rj59c" event={"ID":"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38","Type":"ContainerDied","Data":"d1741f20250c77558b3d93b3bbc5ab72807b100e5b7c7a0c2cafeac50c50af7e"} Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.006159 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab8d7782-81fa-4d33-b995-234a277b2056","Type":"ContainerStarted","Data":"b1e53f86089615ae05b4fc6ba187eb0d18879b2284520a8ec7eb3220f75511e9"} Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.024003 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.406218461 podStartE2EDuration="10.023978369s" podCreationTimestamp="2026-02-02 11:20:39 +0000 UTC" firstStartedPulling="2026-02-02 11:20:40.586244576 +0000 UTC m=+1417.590493538" lastFinishedPulling="2026-02-02 11:20:47.204004484 
+0000 UTC m=+1424.208253446" observedRunningTime="2026-02-02 11:20:49.018346088 +0000 UTC m=+1426.022595060" watchObservedRunningTime="2026-02-02 11:20:49.023978369 +0000 UTC m=+1426.028227321" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.048919 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.217969384 podStartE2EDuration="10.048899986s" podCreationTimestamp="2026-02-02 11:20:39 +0000 UTC" firstStartedPulling="2026-02-02 11:20:40.363501636 +0000 UTC m=+1417.367750598" lastFinishedPulling="2026-02-02 11:20:47.194432238 +0000 UTC m=+1424.198681200" observedRunningTime="2026-02-02 11:20:49.043295816 +0000 UTC m=+1426.047544788" watchObservedRunningTime="2026-02-02 11:20:49.048899986 +0000 UTC m=+1426.053148948" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.574789 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.685317 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-combined-ca-bundle\") pod \"d5a25867-5d37-47f7-b2e9-edcb943f6480\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.685404 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a25867-5d37-47f7-b2e9-edcb943f6480-logs\") pod \"d5a25867-5d37-47f7-b2e9-edcb943f6480\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.685426 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5jt9\" (UniqueName: \"kubernetes.io/projected/d5a25867-5d37-47f7-b2e9-edcb943f6480-kube-api-access-n5jt9\") pod \"d5a25867-5d37-47f7-b2e9-edcb943f6480\" (UID: 
\"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.685472 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-config-data\") pod \"d5a25867-5d37-47f7-b2e9-edcb943f6480\" (UID: \"d5a25867-5d37-47f7-b2e9-edcb943f6480\") " Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.685818 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a25867-5d37-47f7-b2e9-edcb943f6480-logs" (OuterVolumeSpecName: "logs") pod "d5a25867-5d37-47f7-b2e9-edcb943f6480" (UID: "d5a25867-5d37-47f7-b2e9-edcb943f6480"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.685922 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5a25867-5d37-47f7-b2e9-edcb943f6480-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.692207 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a25867-5d37-47f7-b2e9-edcb943f6480-kube-api-access-n5jt9" (OuterVolumeSpecName: "kube-api-access-n5jt9") pod "d5a25867-5d37-47f7-b2e9-edcb943f6480" (UID: "d5a25867-5d37-47f7-b2e9-edcb943f6480"). InnerVolumeSpecName "kube-api-access-n5jt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.713575 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-config-data" (OuterVolumeSpecName: "config-data") pod "d5a25867-5d37-47f7-b2e9-edcb943f6480" (UID: "d5a25867-5d37-47f7-b2e9-edcb943f6480"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.721919 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5a25867-5d37-47f7-b2e9-edcb943f6480" (UID: "d5a25867-5d37-47f7-b2e9-edcb943f6480"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.787238 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.787272 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5jt9\" (UniqueName: \"kubernetes.io/projected/d5a25867-5d37-47f7-b2e9-edcb943f6480-kube-api-access-n5jt9\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.787284 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a25867-5d37-47f7-b2e9-edcb943f6480-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.855874 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.855935 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.900411 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.900466 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 11:20:49 
crc kubenswrapper[4925]: I0202 11:20:49.932460 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 11:20:49 crc kubenswrapper[4925]: I0202 11:20:49.969389 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.000095 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.020299 4925 generic.go:334] "Generic (PLEG): container finished" podID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerID="375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29" exitCode=0 Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.020346 4925 generic.go:334] "Generic (PLEG): container finished" podID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerID="447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76" exitCode=143 Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.020359 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5a25867-5d37-47f7-b2e9-edcb943f6480","Type":"ContainerDied","Data":"375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29"} Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.020411 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5a25867-5d37-47f7-b2e9-edcb943f6480","Type":"ContainerDied","Data":"447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76"} Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.020431 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5a25867-5d37-47f7-b2e9-edcb943f6480","Type":"ContainerDied","Data":"5f6b502f763a70e7b1c508c1f70dfdeebb21322611001cd7b2cf8535a24330c8"} Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.020448 4925 
scope.go:117] "RemoveContainer" containerID="375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.021206 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.084938 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.098215 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-64lxq"] Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.098459 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" podUID="18037f81-0d3a-411d-a0f3-275b5422276e" containerName="dnsmasq-dns" containerID="cri-o://039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae" gracePeriod=10 Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.118323 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.128543 4925 scope.go:117] "RemoveContainer" containerID="447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.128695 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:50 crc kubenswrapper[4925]: E0202 11:20:50.129063 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerName="nova-metadata-metadata" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.129766 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerName="nova-metadata-metadata" Feb 02 11:20:50 crc kubenswrapper[4925]: E0202 11:20:50.129786 4925 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerName="nova-metadata-log" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.129794 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerName="nova-metadata-log" Feb 02 11:20:50 crc kubenswrapper[4925]: E0202 11:20:50.129833 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerName="neutron-httpd" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.129842 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerName="neutron-httpd" Feb 02 11:20:50 crc kubenswrapper[4925]: E0202 11:20:50.129859 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerName="neutron-api" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.129867 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerName="neutron-api" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.130123 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerName="neutron-httpd" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.130176 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerName="nova-metadata-metadata" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.130191 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a25867-5d37-47f7-b2e9-edcb943f6480" containerName="nova-metadata-log" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.130206 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="654b5fff-2101-4f2a-9cc1-1a001d28f425" containerName="neutron-api" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.138645 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.143259 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.143477 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.143649 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.154288 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.230799 4925 scope.go:117] "RemoveContainer" containerID="375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29" Feb 02 11:20:50 crc kubenswrapper[4925]: E0202 11:20:50.236208 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29\": container with ID starting with 375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29 not found: ID does not exist" containerID="375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.236301 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29"} err="failed to get container status \"375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29\": rpc error: code = NotFound desc = could not find container \"375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29\": container with ID starting with 375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29 not found: ID does not exist" Feb 02 11:20:50 crc 
kubenswrapper[4925]: I0202 11:20:50.236353 4925 scope.go:117] "RemoveContainer" containerID="447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76" Feb 02 11:20:50 crc kubenswrapper[4925]: E0202 11:20:50.236860 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76\": container with ID starting with 447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76 not found: ID does not exist" containerID="447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.237850 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76"} err="failed to get container status \"447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76\": rpc error: code = NotFound desc = could not find container \"447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76\": container with ID starting with 447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76 not found: ID does not exist" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.237874 4925 scope.go:117] "RemoveContainer" containerID="375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.238688 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29"} err="failed to get container status \"375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29\": rpc error: code = NotFound desc = could not find container \"375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29\": container with ID starting with 375cc4d0f63f3d7503f93d61509dcdb6a6dc7216a50763522e01c57ea2e26f29 not found: ID does not exist" Feb 02 
11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.238717 4925 scope.go:117] "RemoveContainer" containerID="447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.239167 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76"} err="failed to get container status \"447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76\": rpc error: code = NotFound desc = could not find container \"447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76\": container with ID starting with 447d263f59d37e1ec8b172221c5f371e2930e9e45fa162686adbce2d9651ed76 not found: ID does not exist" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.300261 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.300339 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b93f422-104c-49a6-af4c-d8c88f057a22-logs\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.300380 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.300430 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-config-data\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.300527 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2l96\" (UniqueName: \"kubernetes.io/projected/5b93f422-104c-49a6-af4c-d8c88f057a22-kube-api-access-k2l96\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.402018 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.402164 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b93f422-104c-49a6-af4c-d8c88f057a22-logs\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.402215 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.402256 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-config-data\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.402345 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2l96\" (UniqueName: \"kubernetes.io/projected/5b93f422-104c-49a6-af4c-d8c88f057a22-kube-api-access-k2l96\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.403397 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b93f422-104c-49a6-af4c-d8c88f057a22-logs\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.407205 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-config-data\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.407612 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.409658 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc 
kubenswrapper[4925]: I0202 11:20:50.443814 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2l96\" (UniqueName: \"kubernetes.io/projected/5b93f422-104c-49a6-af4c-d8c88f057a22-kube-api-access-k2l96\") pod \"nova-metadata-0\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.578704 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.645529 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.687152 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a25867-5d37-47f7-b2e9-edcb943f6480" path="/var/lib/kubelet/pods/d5a25867-5d37-47f7-b2e9-edcb943f6480/volumes" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.813198 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-combined-ca-bundle\") pod \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.813340 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2crx\" (UniqueName: \"kubernetes.io/projected/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-kube-api-access-q2crx\") pod \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.813397 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-scripts\") pod \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\" (UID: 
\"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.813444 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-config-data\") pod \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\" (UID: \"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38\") " Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.820367 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-scripts" (OuterVolumeSpecName: "scripts") pod "23811ff6-75a2-4ea6-9ebb-bbca86b5cb38" (UID: "23811ff6-75a2-4ea6-9ebb-bbca86b5cb38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.821000 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-kube-api-access-q2crx" (OuterVolumeSpecName: "kube-api-access-q2crx") pod "23811ff6-75a2-4ea6-9ebb-bbca86b5cb38" (UID: "23811ff6-75a2-4ea6-9ebb-bbca86b5cb38"). InnerVolumeSpecName "kube-api-access-q2crx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.830968 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.856295 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-config-data" (OuterVolumeSpecName: "config-data") pod "23811ff6-75a2-4ea6-9ebb-bbca86b5cb38" (UID: "23811ff6-75a2-4ea6-9ebb-bbca86b5cb38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.876118 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23811ff6-75a2-4ea6-9ebb-bbca86b5cb38" (UID: "23811ff6-75a2-4ea6-9ebb-bbca86b5cb38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.915169 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-dns-svc\") pod \"18037f81-0d3a-411d-a0f3-275b5422276e\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.915274 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-sb\") pod \"18037f81-0d3a-411d-a0f3-275b5422276e\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.915377 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-config\") pod \"18037f81-0d3a-411d-a0f3-275b5422276e\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.915434 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zlnl\" (UniqueName: \"kubernetes.io/projected/18037f81-0d3a-411d-a0f3-275b5422276e-kube-api-access-9zlnl\") pod \"18037f81-0d3a-411d-a0f3-275b5422276e\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.915501 4925 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-nb\") pod \"18037f81-0d3a-411d-a0f3-275b5422276e\" (UID: \"18037f81-0d3a-411d-a0f3-275b5422276e\") " Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.915836 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.915859 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.915873 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2crx\" (UniqueName: \"kubernetes.io/projected/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-kube-api-access-q2crx\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.915883 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.937360 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18037f81-0d3a-411d-a0f3-275b5422276e-kube-api-access-9zlnl" (OuterVolumeSpecName: "kube-api-access-9zlnl") pod "18037f81-0d3a-411d-a0f3-275b5422276e" (UID: "18037f81-0d3a-411d-a0f3-275b5422276e"). InnerVolumeSpecName "kube-api-access-9zlnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.938632 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.939024 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.980201 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18037f81-0d3a-411d-a0f3-275b5422276e" (UID: "18037f81-0d3a-411d-a0f3-275b5422276e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.981221 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18037f81-0d3a-411d-a0f3-275b5422276e" (UID: "18037f81-0d3a-411d-a0f3-275b5422276e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.987736 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-config" (OuterVolumeSpecName: "config") pod "18037f81-0d3a-411d-a0f3-275b5422276e" (UID: "18037f81-0d3a-411d-a0f3-275b5422276e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:50 crc kubenswrapper[4925]: I0202 11:20:50.993780 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18037f81-0d3a-411d-a0f3-275b5422276e" (UID: "18037f81-0d3a-411d-a0f3-275b5422276e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.018203 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.018242 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zlnl\" (UniqueName: \"kubernetes.io/projected/18037f81-0d3a-411d-a0f3-275b5422276e-kube-api-access-9zlnl\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.018258 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.018269 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.018281 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18037f81-0d3a-411d-a0f3-275b5422276e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.031090 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rj59c" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.030992 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rj59c" event={"ID":"23811ff6-75a2-4ea6-9ebb-bbca86b5cb38","Type":"ContainerDied","Data":"6b463613462584bde74029326354b15f4d82910ea8ad6fcd367bc9e2502241a8"} Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.033025 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b463613462584bde74029326354b15f4d82910ea8ad6fcd367bc9e2502241a8" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.034258 4925 generic.go:334] "Generic (PLEG): container finished" podID="18037f81-0d3a-411d-a0f3-275b5422276e" containerID="039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae" exitCode=0 Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.034430 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" event={"ID":"18037f81-0d3a-411d-a0f3-275b5422276e","Type":"ContainerDied","Data":"039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae"} Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.034617 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" event={"ID":"18037f81-0d3a-411d-a0f3-275b5422276e","Type":"ContainerDied","Data":"e10517e4fd5146d1ef8af9ecc1194299996096c6e180cdaa4a52c21bf0d2cff1"} Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.034656 4925 scope.go:117] "RemoveContainer" containerID="039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.034719 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-64lxq" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.057589 4925 scope.go:117] "RemoveContainer" containerID="2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.075412 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-64lxq"] Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.088033 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-64lxq"] Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.107287 4925 scope.go:117] "RemoveContainer" containerID="039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae" Feb 02 11:20:51 crc kubenswrapper[4925]: E0202 11:20:51.107839 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae\": container with ID starting with 039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae not found: ID does not exist" containerID="039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.107884 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae"} err="failed to get container status \"039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae\": rpc error: code = NotFound desc = could not find container \"039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae\": container with ID starting with 039ce53c0eb82d28fa6ee176b0d559c8ddc0e460a41c60ea9c968202a475eaae not found: ID does not exist" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.107910 4925 scope.go:117] "RemoveContainer" containerID="2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373" Feb 02 
11:20:51 crc kubenswrapper[4925]: E0202 11:20:51.108240 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373\": container with ID starting with 2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373 not found: ID does not exist" containerID="2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.108264 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373"} err="failed to get container status \"2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373\": rpc error: code = NotFound desc = could not find container \"2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373\": container with ID starting with 2fb159b4b93a0ebd8d67d252e47fc14e85a66a47dda4cab4c3413ce4521e5373 not found: ID does not exist" Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.235514 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.290611 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.328499 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:51 crc kubenswrapper[4925]: I0202 11:20:51.491806 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.048779 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b93f422-104c-49a6-af4c-d8c88f057a22","Type":"ContainerStarted","Data":"453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0"} Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 
11:20:52.048822 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b93f422-104c-49a6-af4c-d8c88f057a22","Type":"ContainerStarted","Data":"6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880"} Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.048833 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b93f422-104c-49a6-af4c-d8c88f057a22","Type":"ContainerStarted","Data":"f8cb4db161f19e798363e2c7a1c4635a525b36b3683a6ee44168f236e7bdeb0a"} Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.048881 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerName="nova-metadata-log" containerID="cri-o://6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880" gracePeriod=30 Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.049146 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerName="nova-metadata-metadata" containerID="cri-o://453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0" gracePeriod=30 Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.055857 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" containerName="nova-api-log" containerID="cri-o://86bcac6771065eee1b46f2e3684560533411bed503fe339d14cf30a9ed56a4fa" gracePeriod=30 Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.056527 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" containerName="nova-api-api" containerID="cri-o://b1e53f86089615ae05b4fc6ba187eb0d18879b2284520a8ec7eb3220f75511e9" gracePeriod=30 Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 
11:20:52.074204 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.074183505 podStartE2EDuration="2.074183505s" podCreationTimestamp="2026-02-02 11:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:52.066615853 +0000 UTC m=+1429.070864835" watchObservedRunningTime="2026-02-02 11:20:52.074183505 +0000 UTC m=+1429.078432467" Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.679897 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18037f81-0d3a-411d-a0f3-275b5422276e" path="/var/lib/kubelet/pods/18037f81-0d3a-411d-a0f3-275b5422276e/volumes" Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.817517 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.949175 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-nova-metadata-tls-certs\") pod \"5b93f422-104c-49a6-af4c-d8c88f057a22\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.949735 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-config-data\") pod \"5b93f422-104c-49a6-af4c-d8c88f057a22\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.949949 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b93f422-104c-49a6-af4c-d8c88f057a22-logs\") pod \"5b93f422-104c-49a6-af4c-d8c88f057a22\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " Feb 02 
11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.950044 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-combined-ca-bundle\") pod \"5b93f422-104c-49a6-af4c-d8c88f057a22\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.950144 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2l96\" (UniqueName: \"kubernetes.io/projected/5b93f422-104c-49a6-af4c-d8c88f057a22-kube-api-access-k2l96\") pod \"5b93f422-104c-49a6-af4c-d8c88f057a22\" (UID: \"5b93f422-104c-49a6-af4c-d8c88f057a22\") " Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.952399 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b93f422-104c-49a6-af4c-d8c88f057a22-logs" (OuterVolumeSpecName: "logs") pod "5b93f422-104c-49a6-af4c-d8c88f057a22" (UID: "5b93f422-104c-49a6-af4c-d8c88f057a22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.963147 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b93f422-104c-49a6-af4c-d8c88f057a22-kube-api-access-k2l96" (OuterVolumeSpecName: "kube-api-access-k2l96") pod "5b93f422-104c-49a6-af4c-d8c88f057a22" (UID: "5b93f422-104c-49a6-af4c-d8c88f057a22"). InnerVolumeSpecName "kube-api-access-k2l96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.982147 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b93f422-104c-49a6-af4c-d8c88f057a22" (UID: "5b93f422-104c-49a6-af4c-d8c88f057a22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:52 crc kubenswrapper[4925]: I0202 11:20:52.990190 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-config-data" (OuterVolumeSpecName: "config-data") pod "5b93f422-104c-49a6-af4c-d8c88f057a22" (UID: "5b93f422-104c-49a6-af4c-d8c88f057a22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.007149 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5b93f422-104c-49a6-af4c-d8c88f057a22" (UID: "5b93f422-104c-49a6-af4c-d8c88f057a22"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.052958 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b93f422-104c-49a6-af4c-d8c88f057a22-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.053003 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.053017 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2l96\" (UniqueName: \"kubernetes.io/projected/5b93f422-104c-49a6-af4c-d8c88f057a22-kube-api-access-k2l96\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.053030 4925 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-nova-metadata-tls-certs\") on 
node \"crc\" DevicePath \"\"" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.053042 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b93f422-104c-49a6-af4c-d8c88f057a22-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.069473 4925 generic.go:334] "Generic (PLEG): container finished" podID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerID="453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0" exitCode=0 Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.069506 4925 generic.go:334] "Generic (PLEG): container finished" podID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerID="6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880" exitCode=143 Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.069552 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b93f422-104c-49a6-af4c-d8c88f057a22","Type":"ContainerDied","Data":"453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0"} Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.069578 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b93f422-104c-49a6-af4c-d8c88f057a22","Type":"ContainerDied","Data":"6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880"} Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.069587 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5b93f422-104c-49a6-af4c-d8c88f057a22","Type":"ContainerDied","Data":"f8cb4db161f19e798363e2c7a1c4635a525b36b3683a6ee44168f236e7bdeb0a"} Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.069602 4925 scope.go:117] "RemoveContainer" containerID="453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.069599 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.073306 4925 generic.go:334] "Generic (PLEG): container finished" podID="ab8d7782-81fa-4d33-b995-234a277b2056" containerID="86bcac6771065eee1b46f2e3684560533411bed503fe339d14cf30a9ed56a4fa" exitCode=143 Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.073523 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="107183ad-93d2-41f6-ae04-35f0d583befa" containerName="nova-scheduler-scheduler" containerID="cri-o://9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1" gracePeriod=30 Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.073821 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab8d7782-81fa-4d33-b995-234a277b2056","Type":"ContainerDied","Data":"86bcac6771065eee1b46f2e3684560533411bed503fe339d14cf30a9ed56a4fa"} Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.093925 4925 scope.go:117] "RemoveContainer" containerID="6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.117349 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.130753 4925 scope.go:117] "RemoveContainer" containerID="453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0" Feb 02 11:20:53 crc kubenswrapper[4925]: E0202 11:20:53.132214 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0\": container with ID starting with 453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0 not found: ID does not exist" containerID="453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 
11:20:53.132341 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0"} err="failed to get container status \"453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0\": rpc error: code = NotFound desc = could not find container \"453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0\": container with ID starting with 453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0 not found: ID does not exist" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.132459 4925 scope.go:117] "RemoveContainer" containerID="6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880" Feb 02 11:20:53 crc kubenswrapper[4925]: E0202 11:20:53.132957 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880\": container with ID starting with 6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880 not found: ID does not exist" containerID="6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.132992 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880"} err="failed to get container status \"6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880\": rpc error: code = NotFound desc = could not find container \"6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880\": container with ID starting with 6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880 not found: ID does not exist" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.133017 4925 scope.go:117] "RemoveContainer" containerID="453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0" Feb 02 11:20:53 crc 
kubenswrapper[4925]: I0202 11:20:53.133457 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0"} err="failed to get container status \"453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0\": rpc error: code = NotFound desc = could not find container \"453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0\": container with ID starting with 453c665340e975b944e0515441360461dcd862d760180e543ce47a780bafd4d0 not found: ID does not exist" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.133490 4925 scope.go:117] "RemoveContainer" containerID="6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.133697 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880"} err="failed to get container status \"6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880\": rpc error: code = NotFound desc = could not find container \"6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880\": container with ID starting with 6d64239704c5b8340f8d895f0797e6265c0151d37cb3ec737befe87af21bd880 not found: ID does not exist" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.145782 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.160023 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:53 crc kubenswrapper[4925]: E0202 11:20:53.161241 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerName="nova-metadata-metadata" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.161266 4925 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerName="nova-metadata-metadata" Feb 02 11:20:53 crc kubenswrapper[4925]: E0202 11:20:53.161289 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18037f81-0d3a-411d-a0f3-275b5422276e" containerName="dnsmasq-dns" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.161297 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="18037f81-0d3a-411d-a0f3-275b5422276e" containerName="dnsmasq-dns" Feb 02 11:20:53 crc kubenswrapper[4925]: E0202 11:20:53.161316 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerName="nova-metadata-log" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.161322 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerName="nova-metadata-log" Feb 02 11:20:53 crc kubenswrapper[4925]: E0202 11:20:53.161342 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18037f81-0d3a-411d-a0f3-275b5422276e" containerName="init" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.161349 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="18037f81-0d3a-411d-a0f3-275b5422276e" containerName="init" Feb 02 11:20:53 crc kubenswrapper[4925]: E0202 11:20:53.161359 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23811ff6-75a2-4ea6-9ebb-bbca86b5cb38" containerName="nova-manage" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.161365 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="23811ff6-75a2-4ea6-9ebb-bbca86b5cb38" containerName="nova-manage" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.161558 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="18037f81-0d3a-411d-a0f3-275b5422276e" containerName="dnsmasq-dns" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.161583 4925 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerName="nova-metadata-metadata" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.161599 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="23811ff6-75a2-4ea6-9ebb-bbca86b5cb38" containerName="nova-manage" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.161623 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b93f422-104c-49a6-af4c-d8c88f057a22" containerName="nova-metadata-log" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.162716 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.167461 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.167541 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.170521 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.256235 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snw4k\" (UniqueName: \"kubernetes.io/projected/bc22bb3f-71e3-416d-a8cf-62656c441f54-kube-api-access-snw4k\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.256702 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-config-data\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.256753 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc22bb3f-71e3-416d-a8cf-62656c441f54-logs\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.256836 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.256869 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.358045 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snw4k\" (UniqueName: \"kubernetes.io/projected/bc22bb3f-71e3-416d-a8cf-62656c441f54-kube-api-access-snw4k\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.358334 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-config-data\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.358479 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bc22bb3f-71e3-416d-a8cf-62656c441f54-logs\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.358572 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.358643 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.359594 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc22bb3f-71e3-416d-a8cf-62656c441f54-logs\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.366823 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.367625 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 
11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.374282 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-config-data\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.378633 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snw4k\" (UniqueName: \"kubernetes.io/projected/bc22bb3f-71e3-416d-a8cf-62656c441f54-kube-api-access-snw4k\") pod \"nova-metadata-0\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.500116 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:20:53 crc kubenswrapper[4925]: I0202 11:20:53.978825 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.084922 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc22bb3f-71e3-416d-a8cf-62656c441f54","Type":"ContainerStarted","Data":"c86fc94f55eb214a136de289e60494051971b7a602c400e0d1214fb8d02197c0"} Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.431655 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.579302 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhs4n\" (UniqueName: \"kubernetes.io/projected/107183ad-93d2-41f6-ae04-35f0d583befa-kube-api-access-nhs4n\") pod \"107183ad-93d2-41f6-ae04-35f0d583befa\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.579542 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-combined-ca-bundle\") pod \"107183ad-93d2-41f6-ae04-35f0d583befa\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.579588 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-config-data\") pod \"107183ad-93d2-41f6-ae04-35f0d583befa\" (UID: \"107183ad-93d2-41f6-ae04-35f0d583befa\") " Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.585404 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107183ad-93d2-41f6-ae04-35f0d583befa-kube-api-access-nhs4n" (OuterVolumeSpecName: "kube-api-access-nhs4n") pod "107183ad-93d2-41f6-ae04-35f0d583befa" (UID: "107183ad-93d2-41f6-ae04-35f0d583befa"). InnerVolumeSpecName "kube-api-access-nhs4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.613283 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "107183ad-93d2-41f6-ae04-35f0d583befa" (UID: "107183ad-93d2-41f6-ae04-35f0d583befa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.616106 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-config-data" (OuterVolumeSpecName: "config-data") pod "107183ad-93d2-41f6-ae04-35f0d583befa" (UID: "107183ad-93d2-41f6-ae04-35f0d583befa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.673859 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b93f422-104c-49a6-af4c-d8c88f057a22" path="/var/lib/kubelet/pods/5b93f422-104c-49a6-af4c-d8c88f057a22/volumes" Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.681336 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.681380 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/107183ad-93d2-41f6-ae04-35f0d583befa-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:54 crc kubenswrapper[4925]: I0202 11:20:54.681395 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhs4n\" (UniqueName: \"kubernetes.io/projected/107183ad-93d2-41f6-ae04-35f0d583befa-kube-api-access-nhs4n\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.098267 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc22bb3f-71e3-416d-a8cf-62656c441f54","Type":"ContainerStarted","Data":"a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae"} Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.098515 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"bc22bb3f-71e3-416d-a8cf-62656c441f54","Type":"ContainerStarted","Data":"3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d"} Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.101379 4925 generic.go:334] "Generic (PLEG): container finished" podID="107183ad-93d2-41f6-ae04-35f0d583befa" containerID="9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1" exitCode=0 Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.101423 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"107183ad-93d2-41f6-ae04-35f0d583befa","Type":"ContainerDied","Data":"9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1"} Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.101448 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"107183ad-93d2-41f6-ae04-35f0d583befa","Type":"ContainerDied","Data":"19f91cc4d9f94b784aeb7af05d7caba7defdd3a0e2c86336d9ca9ef2af5b8bca"} Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.101466 4925 scope.go:117] "RemoveContainer" containerID="9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.101561 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.128947 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.128924602 podStartE2EDuration="2.128924602s" podCreationTimestamp="2026-02-02 11:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:20:55.120354913 +0000 UTC m=+1432.124603885" watchObservedRunningTime="2026-02-02 11:20:55.128924602 +0000 UTC m=+1432.133173564" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.129643 4925 scope.go:117] "RemoveContainer" containerID="9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1" Feb 02 11:20:55 crc kubenswrapper[4925]: E0202 11:20:55.130032 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1\": container with ID starting with 9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1 not found: ID does not exist" containerID="9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.130060 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1"} err="failed to get container status \"9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1\": rpc error: code = NotFound desc = could not find container \"9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1\": container with ID starting with 9748f99e1083aa224abbd2165c80560ceae1e299a9c20c2f0f4666f935b2b6a1 not found: ID does not exist" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.152215 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.161861 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.172963 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:20:55 crc kubenswrapper[4925]: E0202 11:20:55.173391 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107183ad-93d2-41f6-ae04-35f0d583befa" containerName="nova-scheduler-scheduler" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.173417 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="107183ad-93d2-41f6-ae04-35f0d583befa" containerName="nova-scheduler-scheduler" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.173671 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="107183ad-93d2-41f6-ae04-35f0d583befa" containerName="nova-scheduler-scheduler" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.174481 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.177465 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.222401 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.292886 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.292942 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlgm\" (UniqueName: \"kubernetes.io/projected/7ab7de54-20fd-4483-b00e-ad3ef863bf47-kube-api-access-9xlgm\") pod \"nova-scheduler-0\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.293036 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-config-data\") pod \"nova-scheduler-0\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.394451 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xlgm\" (UniqueName: \"kubernetes.io/projected/7ab7de54-20fd-4483-b00e-ad3ef863bf47-kube-api-access-9xlgm\") pod \"nova-scheduler-0\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.394591 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-config-data\") pod \"nova-scheduler-0\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.394654 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.401501 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.401530 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-config-data\") pod \"nova-scheduler-0\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.412911 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xlgm\" (UniqueName: \"kubernetes.io/projected/7ab7de54-20fd-4483-b00e-ad3ef863bf47-kube-api-access-9xlgm\") pod \"nova-scheduler-0\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " pod="openstack/nova-scheduler-0" Feb 02 11:20:55 crc kubenswrapper[4925]: I0202 11:20:55.523524 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:20:56 crc kubenswrapper[4925]: I0202 11:20:56.058477 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:20:56 crc kubenswrapper[4925]: I0202 11:20:56.115427 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7ab7de54-20fd-4483-b00e-ad3ef863bf47","Type":"ContainerStarted","Data":"174b66c0f5b928ede0a11bb12a436491339863d632c13cedcd7a1087db496d35"} Feb 02 11:20:56 crc kubenswrapper[4925]: I0202 11:20:56.675941 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="107183ad-93d2-41f6-ae04-35f0d583befa" path="/var/lib/kubelet/pods/107183ad-93d2-41f6-ae04-35f0d583befa/volumes" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.125974 4925 generic.go:334] "Generic (PLEG): container finished" podID="ab8d7782-81fa-4d33-b995-234a277b2056" containerID="b1e53f86089615ae05b4fc6ba187eb0d18879b2284520a8ec7eb3220f75511e9" exitCode=0 Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.126048 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab8d7782-81fa-4d33-b995-234a277b2056","Type":"ContainerDied","Data":"b1e53f86089615ae05b4fc6ba187eb0d18879b2284520a8ec7eb3220f75511e9"} Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.130761 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7ab7de54-20fd-4483-b00e-ad3ef863bf47","Type":"ContainerStarted","Data":"16796e4d3f44972434ecf542ccd9156a72b4e810c7a4564fae26e1a9099c69d9"} Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.157734 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.157715075 podStartE2EDuration="2.157715075s" podCreationTimestamp="2026-02-02 11:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-02 11:20:57.150624495 +0000 UTC m=+1434.154873467" watchObservedRunningTime="2026-02-02 11:20:57.157715075 +0000 UTC m=+1434.161964037" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.540781 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.639848 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kww42\" (UniqueName: \"kubernetes.io/projected/ab8d7782-81fa-4d33-b995-234a277b2056-kube-api-access-kww42\") pod \"ab8d7782-81fa-4d33-b995-234a277b2056\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.639961 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d7782-81fa-4d33-b995-234a277b2056-logs\") pod \"ab8d7782-81fa-4d33-b995-234a277b2056\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.640037 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-config-data\") pod \"ab8d7782-81fa-4d33-b995-234a277b2056\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.640107 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-combined-ca-bundle\") pod \"ab8d7782-81fa-4d33-b995-234a277b2056\" (UID: \"ab8d7782-81fa-4d33-b995-234a277b2056\") " Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.640971 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab8d7782-81fa-4d33-b995-234a277b2056-logs" (OuterVolumeSpecName: "logs") pod 
"ab8d7782-81fa-4d33-b995-234a277b2056" (UID: "ab8d7782-81fa-4d33-b995-234a277b2056"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.645455 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8d7782-81fa-4d33-b995-234a277b2056-kube-api-access-kww42" (OuterVolumeSpecName: "kube-api-access-kww42") pod "ab8d7782-81fa-4d33-b995-234a277b2056" (UID: "ab8d7782-81fa-4d33-b995-234a277b2056"). InnerVolumeSpecName "kube-api-access-kww42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.667661 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-config-data" (OuterVolumeSpecName: "config-data") pod "ab8d7782-81fa-4d33-b995-234a277b2056" (UID: "ab8d7782-81fa-4d33-b995-234a277b2056"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.683293 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab8d7782-81fa-4d33-b995-234a277b2056" (UID: "ab8d7782-81fa-4d33-b995-234a277b2056"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.742300 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kww42\" (UniqueName: \"kubernetes.io/projected/ab8d7782-81fa-4d33-b995-234a277b2056-kube-api-access-kww42\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.742340 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab8d7782-81fa-4d33-b995-234a277b2056-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.742353 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.742366 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8d7782-81fa-4d33-b995-234a277b2056-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:57 crc kubenswrapper[4925]: I0202 11:20:57.929663 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.139620 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab8d7782-81fa-4d33-b995-234a277b2056","Type":"ContainerDied","Data":"eb5349f3e16a7f067350f48a6d33062ec91e804372f7e8892bb22c625005d37d"} Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.139671 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.139677 4925 scope.go:117] "RemoveContainer" containerID="b1e53f86089615ae05b4fc6ba187eb0d18879b2284520a8ec7eb3220f75511e9" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.163571 4925 scope.go:117] "RemoveContainer" containerID="86bcac6771065eee1b46f2e3684560533411bed503fe339d14cf30a9ed56a4fa" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.190399 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.229347 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.240215 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 11:20:58 crc kubenswrapper[4925]: E0202 11:20:58.240621 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" containerName="nova-api-log" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.240638 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" containerName="nova-api-log" Feb 02 11:20:58 crc kubenswrapper[4925]: E0202 11:20:58.240662 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" containerName="nova-api-api" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.240671 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" containerName="nova-api-api" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.240897 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" containerName="nova-api-log" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.240925 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" 
containerName="nova-api-api" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.241811 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.243951 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.261382 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.356709 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.356809 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntpb\" (UniqueName: \"kubernetes.io/projected/178dd29c-dc37-458d-9e16-fd649b5ee0f2-kube-api-access-zntpb\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.356888 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-config-data\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.357006 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/178dd29c-dc37-458d-9e16-fd649b5ee0f2-logs\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 
crc kubenswrapper[4925]: I0202 11:20:58.458327 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/178dd29c-dc37-458d-9e16-fd649b5ee0f2-logs\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.458397 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.458433 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zntpb\" (UniqueName: \"kubernetes.io/projected/178dd29c-dc37-458d-9e16-fd649b5ee0f2-kube-api-access-zntpb\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.458487 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-config-data\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.458743 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/178dd29c-dc37-458d-9e16-fd649b5ee0f2-logs\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.476106 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.476176 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-config-data\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.479753 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zntpb\" (UniqueName: \"kubernetes.io/projected/178dd29c-dc37-458d-9e16-fd649b5ee0f2-kube-api-access-zntpb\") pod \"nova-api-0\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.500573 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.500640 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.567161 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:20:58 crc kubenswrapper[4925]: I0202 11:20:58.679309 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8d7782-81fa-4d33-b995-234a277b2056" path="/var/lib/kubelet/pods/ab8d7782-81fa-4d33-b995-234a277b2056/volumes" Feb 02 11:20:59 crc kubenswrapper[4925]: I0202 11:20:59.021996 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:20:59 crc kubenswrapper[4925]: I0202 11:20:59.150090 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"178dd29c-dc37-458d-9e16-fd649b5ee0f2","Type":"ContainerStarted","Data":"128872a5adf3089a97764ccbadcd8e0311a831f9bfe41d4fbdba3d83e460bfaf"} Feb 02 11:21:00 crc kubenswrapper[4925]: I0202 11:21:00.160457 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"178dd29c-dc37-458d-9e16-fd649b5ee0f2","Type":"ContainerStarted","Data":"9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770"} Feb 02 11:21:00 crc kubenswrapper[4925]: I0202 11:21:00.160768 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"178dd29c-dc37-458d-9e16-fd649b5ee0f2","Type":"ContainerStarted","Data":"4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678"} Feb 02 11:21:00 crc kubenswrapper[4925]: I0202 11:21:00.182538 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.182514011 podStartE2EDuration="2.182514011s" podCreationTimestamp="2026-02-02 11:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:21:00.178575936 +0000 UTC m=+1437.182824908" watchObservedRunningTime="2026-02-02 11:21:00.182514011 +0000 UTC m=+1437.186762993" Feb 02 11:21:00 crc kubenswrapper[4925]: I0202 11:21:00.524436 4925 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 11:21:03 crc kubenswrapper[4925]: I0202 11:21:03.500274 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:21:03 crc kubenswrapper[4925]: I0202 11:21:03.500985 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:21:04 crc kubenswrapper[4925]: I0202 11:21:04.516401 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.177:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:04 crc kubenswrapper[4925]: I0202 11:21:04.517057 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.177:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:05 crc kubenswrapper[4925]: I0202 11:21:05.524597 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 11:21:05 crc kubenswrapper[4925]: I0202 11:21:05.552937 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.211844 4925 generic.go:334] "Generic (PLEG): container finished" podID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerID="0ef01eebd351c44a7555b45804b19d15327777a1bca419140dde38773e84010f" exitCode=137 Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.211929 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerDied","Data":"0ef01eebd351c44a7555b45804b19d15327777a1bca419140dde38773e84010f"} Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.235885 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.599281 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.723780 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-log-httpd\") pod \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.724057 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-combined-ca-bundle\") pod \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.724151 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-sg-core-conf-yaml\") pod \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.724317 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-scripts\") pod \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.724379 4925 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" (UID: "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.724469 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-run-httpd\") pod \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.724591 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llrnm\" (UniqueName: \"kubernetes.io/projected/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-kube-api-access-llrnm\") pod \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.724679 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-config-data\") pod \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\" (UID: \"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5\") " Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.725024 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" (UID: "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.725603 4925 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.725631 4925 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.731069 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-kube-api-access-llrnm" (OuterVolumeSpecName: "kube-api-access-llrnm") pod "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" (UID: "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5"). InnerVolumeSpecName "kube-api-access-llrnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.739135 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-scripts" (OuterVolumeSpecName: "scripts") pod "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" (UID: "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.754350 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" (UID: "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.806633 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" (UID: "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.827624 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llrnm\" (UniqueName: \"kubernetes.io/projected/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-kube-api-access-llrnm\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.827892 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.827908 4925 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.827917 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.839951 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-config-data" (OuterVolumeSpecName: "config-data") pod "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" (UID: "3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:06 crc kubenswrapper[4925]: I0202 11:21:06.929536 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.223257 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5","Type":"ContainerDied","Data":"45a139414aee4f5ec2721b689b7e16c7cc41ebf955dd1f81ee10a03981fd4fc0"} Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.223590 4925 scope.go:117] "RemoveContainer" containerID="0ef01eebd351c44a7555b45804b19d15327777a1bca419140dde38773e84010f" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.223448 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.254363 4925 scope.go:117] "RemoveContainer" containerID="bb3d9af736b4b8552aa3d4d031897eaf61c2868ab752a76154c1413e259ef78e" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.262845 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.274518 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.278041 4925 scope.go:117] "RemoveContainer" containerID="0b8364fbbe1d88a3a775e887e55b327abc2d0074b6151a25cdb20720676fd49f" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.291290 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:07 crc kubenswrapper[4925]: E0202 11:21:07.291640 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="ceilometer-central-agent" Feb 02 11:21:07 crc 
kubenswrapper[4925]: I0202 11:21:07.291655 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="ceilometer-central-agent" Feb 02 11:21:07 crc kubenswrapper[4925]: E0202 11:21:07.291669 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="ceilometer-notification-agent" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.291674 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="ceilometer-notification-agent" Feb 02 11:21:07 crc kubenswrapper[4925]: E0202 11:21:07.291686 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="sg-core" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.291692 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="sg-core" Feb 02 11:21:07 crc kubenswrapper[4925]: E0202 11:21:07.291711 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="proxy-httpd" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.291717 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="proxy-httpd" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.291887 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="ceilometer-central-agent" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.291902 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="ceilometer-notification-agent" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.291916 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="sg-core" 
Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.291923 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" containerName="proxy-httpd" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.293361 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.296038 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.296339 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.330244 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.346696 4925 scope.go:117] "RemoveContainer" containerID="1c3cdf8e5f3bc2118fb0a1b18ab5476f371f50cf9a65c30038f9e9f8651a3b11" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.437598 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-scripts\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.437645 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-config-data\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.437667 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.437708 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.437724 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9wgn\" (UniqueName: \"kubernetes.io/projected/60c8b42e-44ea-4825-bc9f-263babd207bd-kube-api-access-r9wgn\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.437748 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-log-httpd\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.437797 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-run-httpd\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.539449 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-scripts\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " 
pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.539490 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-config-data\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.539516 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.539557 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.539573 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9wgn\" (UniqueName: \"kubernetes.io/projected/60c8b42e-44ea-4825-bc9f-263babd207bd-kube-api-access-r9wgn\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.539596 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-log-httpd\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.539643 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-run-httpd\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.540056 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-run-httpd\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.541365 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-log-httpd\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.544180 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.545009 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-config-data\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.545691 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.553056 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-scripts\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.558144 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9wgn\" (UniqueName: \"kubernetes.io/projected/60c8b42e-44ea-4825-bc9f-263babd207bd-kube-api-access-r9wgn\") pod \"ceilometer-0\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " pod="openstack/ceilometer-0" Feb 02 11:21:07 crc kubenswrapper[4925]: I0202 11:21:07.616689 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:21:08 crc kubenswrapper[4925]: I0202 11:21:08.069289 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:08 crc kubenswrapper[4925]: I0202 11:21:08.232171 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerStarted","Data":"e40942e0b49ac1e95a634bdd1c3d80cf55d13d264f666b919677a6b625942af4"} Feb 02 11:21:08 crc kubenswrapper[4925]: I0202 11:21:08.567985 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:21:08 crc kubenswrapper[4925]: I0202 11:21:08.568580 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:21:08 crc kubenswrapper[4925]: I0202 11:21:08.675041 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5" path="/var/lib/kubelet/pods/3f1f471d-1fa0-42f2-91e0-39a3f3b6bfa5/volumes" Feb 02 11:21:09 crc kubenswrapper[4925]: I0202 11:21:09.650279 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" 
containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:09 crc kubenswrapper[4925]: I0202 11:21:09.650361 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:10 crc kubenswrapper[4925]: I0202 11:21:10.253158 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerStarted","Data":"a2e9f181c97c92e8b776a69ff7268de8d3d4f40c86a575765c869148026fe0c4"} Feb 02 11:21:11 crc kubenswrapper[4925]: I0202 11:21:11.264601 4925 generic.go:334] "Generic (PLEG): container finished" podID="c5e37af9-8c2c-4349-8496-4af3ce643c26" containerID="26722a9ff8843dbf64103f97b7e1c266525c9e970107164e240fb31472bc3990" exitCode=0 Feb 02 11:21:11 crc kubenswrapper[4925]: I0202 11:21:11.264697 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w48pz" event={"ID":"c5e37af9-8c2c-4349-8496-4af3ce643c26","Type":"ContainerDied","Data":"26722a9ff8843dbf64103f97b7e1c266525c9e970107164e240fb31472bc3990"} Feb 02 11:21:11 crc kubenswrapper[4925]: I0202 11:21:11.267270 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerStarted","Data":"9d173c120510d59f9905ff1f770ec7ca212a8976ebc841b88dea335e6002c557"} Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.279654 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerStarted","Data":"c39acb2f71d609591a61098d9d29038ea3cd3141afa84665baf1245effbc921e"} Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.693342 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.829606 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czh7h\" (UniqueName: \"kubernetes.io/projected/c5e37af9-8c2c-4349-8496-4af3ce643c26-kube-api-access-czh7h\") pod \"c5e37af9-8c2c-4349-8496-4af3ce643c26\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.829681 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-combined-ca-bundle\") pod \"c5e37af9-8c2c-4349-8496-4af3ce643c26\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.829773 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-scripts\") pod \"c5e37af9-8c2c-4349-8496-4af3ce643c26\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.830141 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-config-data\") pod \"c5e37af9-8c2c-4349-8496-4af3ce643c26\" (UID: \"c5e37af9-8c2c-4349-8496-4af3ce643c26\") " Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.836632 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e37af9-8c2c-4349-8496-4af3ce643c26-kube-api-access-czh7h" 
(OuterVolumeSpecName: "kube-api-access-czh7h") pod "c5e37af9-8c2c-4349-8496-4af3ce643c26" (UID: "c5e37af9-8c2c-4349-8496-4af3ce643c26"). InnerVolumeSpecName "kube-api-access-czh7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.837015 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-scripts" (OuterVolumeSpecName: "scripts") pod "c5e37af9-8c2c-4349-8496-4af3ce643c26" (UID: "c5e37af9-8c2c-4349-8496-4af3ce643c26"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.866594 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-config-data" (OuterVolumeSpecName: "config-data") pod "c5e37af9-8c2c-4349-8496-4af3ce643c26" (UID: "c5e37af9-8c2c-4349-8496-4af3ce643c26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.868355 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5e37af9-8c2c-4349-8496-4af3ce643c26" (UID: "c5e37af9-8c2c-4349-8496-4af3ce643c26"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.932196 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.932244 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czh7h\" (UniqueName: \"kubernetes.io/projected/c5e37af9-8c2c-4349-8496-4af3ce643c26-kube-api-access-czh7h\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.932256 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:12 crc kubenswrapper[4925]: I0202 11:21:12.932265 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e37af9-8c2c-4349-8496-4af3ce643c26-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.288532 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w48pz" event={"ID":"c5e37af9-8c2c-4349-8496-4af3ce643c26","Type":"ContainerDied","Data":"4170f01440cb398b60e472e49bdaa6a3741685b1e4b40acd078f2186aac8981c"} Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.288885 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4170f01440cb398b60e472e49bdaa6a3741685b1e4b40acd078f2186aac8981c" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.288621 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w48pz" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.390845 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 11:21:13 crc kubenswrapper[4925]: E0202 11:21:13.391304 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e37af9-8c2c-4349-8496-4af3ce643c26" containerName="nova-cell1-conductor-db-sync" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.391325 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e37af9-8c2c-4349-8496-4af3ce643c26" containerName="nova-cell1-conductor-db-sync" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.391535 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e37af9-8c2c-4349-8496-4af3ce643c26" containerName="nova-cell1-conductor-db-sync" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.393000 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.406598 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.452068 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.508541 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.509347 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.514475 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.545093 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbln\" (UniqueName: \"kubernetes.io/projected/be99b255-6467-42af-bb3b-4e6d05fccc64-kube-api-access-pvbln\") pod \"nova-cell1-conductor-0\" (UID: \"be99b255-6467-42af-bb3b-4e6d05fccc64\") " pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.545407 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be99b255-6467-42af-bb3b-4e6d05fccc64-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"be99b255-6467-42af-bb3b-4e6d05fccc64\") " pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.545608 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be99b255-6467-42af-bb3b-4e6d05fccc64-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"be99b255-6467-42af-bb3b-4e6d05fccc64\") " pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.647226 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be99b255-6467-42af-bb3b-4e6d05fccc64-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"be99b255-6467-42af-bb3b-4e6d05fccc64\") " pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.647397 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvbln\" (UniqueName: \"kubernetes.io/projected/be99b255-6467-42af-bb3b-4e6d05fccc64-kube-api-access-pvbln\") pod \"nova-cell1-conductor-0\" (UID: \"be99b255-6467-42af-bb3b-4e6d05fccc64\") " pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.647425 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be99b255-6467-42af-bb3b-4e6d05fccc64-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"be99b255-6467-42af-bb3b-4e6d05fccc64\") " pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.651410 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be99b255-6467-42af-bb3b-4e6d05fccc64-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"be99b255-6467-42af-bb3b-4e6d05fccc64\") " pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.652260 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be99b255-6467-42af-bb3b-4e6d05fccc64-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"be99b255-6467-42af-bb3b-4e6d05fccc64\") " pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.666338 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvbln\" (UniqueName: \"kubernetes.io/projected/be99b255-6467-42af-bb3b-4e6d05fccc64-kube-api-access-pvbln\") pod \"nova-cell1-conductor-0\" (UID: \"be99b255-6467-42af-bb3b-4e6d05fccc64\") " pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:13 crc kubenswrapper[4925]: I0202 11:21:13.753038 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:14 crc kubenswrapper[4925]: I0202 11:21:14.225679 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 11:21:14 crc kubenswrapper[4925]: I0202 11:21:14.300324 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"be99b255-6467-42af-bb3b-4e6d05fccc64","Type":"ContainerStarted","Data":"8572d1bbd9ba1caec1bd9012efb52fa6b1d667fe08b2c151d1f795360b821c9b"} Feb 02 11:21:14 crc kubenswrapper[4925]: I0202 11:21:14.303369 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerStarted","Data":"490d964337a1e25b46a0ad2a0d73cacf6cf3e763b4fc63b58eb5587a1f563643"} Feb 02 11:21:14 crc kubenswrapper[4925]: I0202 11:21:14.315946 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:21:14 crc kubenswrapper[4925]: I0202 11:21:14.332633 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8905228649999999 podStartE2EDuration="7.332605981s" podCreationTimestamp="2026-02-02 11:21:07 +0000 UTC" firstStartedPulling="2026-02-02 11:21:08.079291686 +0000 UTC m=+1445.083540648" lastFinishedPulling="2026-02-02 11:21:13.521374802 +0000 UTC m=+1450.525623764" observedRunningTime="2026-02-02 11:21:14.319844719 +0000 UTC m=+1451.324093671" watchObservedRunningTime="2026-02-02 11:21:14.332605981 +0000 UTC m=+1451.336854943" Feb 02 11:21:15 crc kubenswrapper[4925]: I0202 11:21:15.313338 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"be99b255-6467-42af-bb3b-4e6d05fccc64","Type":"ContainerStarted","Data":"3459fdf75fd807d420bfa3296b23b1104e52741d30103d49770084c7da3482ae"} Feb 02 11:21:15 crc kubenswrapper[4925]: I0202 11:21:15.314200 4925 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:21:15 crc kubenswrapper[4925]: I0202 11:21:15.329742 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.329720025 podStartE2EDuration="2.329720025s" podCreationTimestamp="2026-02-02 11:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:21:15.327449384 +0000 UTC m=+1452.331698346" watchObservedRunningTime="2026-02-02 11:21:15.329720025 +0000 UTC m=+1452.333968987" Feb 02 11:21:16 crc kubenswrapper[4925]: I0202 11:21:16.320396 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.338910 4925 generic.go:334] "Generic (PLEG): container finished" podID="38fb740d-7a25-4acc-b004-648500772071" containerID="ab9ccb90370780273fe6cc2939a57209b6acbe110f767f98f8fa60c7ba7f6312" exitCode=137 Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.338997 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"38fb740d-7a25-4acc-b004-648500772071","Type":"ContainerDied","Data":"ab9ccb90370780273fe6cc2939a57209b6acbe110f767f98f8fa60c7ba7f6312"} Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.406526 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.543316 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9vfn\" (UniqueName: \"kubernetes.io/projected/38fb740d-7a25-4acc-b004-648500772071-kube-api-access-n9vfn\") pod \"38fb740d-7a25-4acc-b004-648500772071\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.543467 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-config-data\") pod \"38fb740d-7a25-4acc-b004-648500772071\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.543562 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-combined-ca-bundle\") pod \"38fb740d-7a25-4acc-b004-648500772071\" (UID: \"38fb740d-7a25-4acc-b004-648500772071\") " Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.552742 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fb740d-7a25-4acc-b004-648500772071-kube-api-access-n9vfn" (OuterVolumeSpecName: "kube-api-access-n9vfn") pod "38fb740d-7a25-4acc-b004-648500772071" (UID: "38fb740d-7a25-4acc-b004-648500772071"). InnerVolumeSpecName "kube-api-access-n9vfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.571552 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.571658 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.573820 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.573875 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.574281 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38fb740d-7a25-4acc-b004-648500772071" (UID: "38fb740d-7a25-4acc-b004-648500772071"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.577421 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.579183 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.579290 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-config-data" (OuterVolumeSpecName: "config-data") pod "38fb740d-7a25-4acc-b004-648500772071" (UID: "38fb740d-7a25-4acc-b004-648500772071"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.646562 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9vfn\" (UniqueName: \"kubernetes.io/projected/38fb740d-7a25-4acc-b004-648500772071-kube-api-access-n9vfn\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.646734 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.646890 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38fb740d-7a25-4acc-b004-648500772071-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.755733 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tn88d"] Feb 02 11:21:18 crc kubenswrapper[4925]: E0202 11:21:18.756244 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fb740d-7a25-4acc-b004-648500772071" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.756265 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fb740d-7a25-4acc-b004-648500772071" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.756472 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fb740d-7a25-4acc-b004-648500772071" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.757557 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.782321 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tn88d"] Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.851148 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-dns-svc\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.851219 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.851319 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-config\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.851345 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.851381 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2ngmq\" (UniqueName: \"kubernetes.io/projected/03aa6e1e-8e44-45b8-9802-10f7e388c390-kube-api-access-2ngmq\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.953070 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-dns-svc\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.953173 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.953263 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-config\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.953290 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.953329 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ngmq\" (UniqueName: 
\"kubernetes.io/projected/03aa6e1e-8e44-45b8-9802-10f7e388c390-kube-api-access-2ngmq\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.954053 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-dns-svc\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.954188 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.954273 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.954284 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-config\") pod \"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:18 crc kubenswrapper[4925]: I0202 11:21:18.971554 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ngmq\" (UniqueName: \"kubernetes.io/projected/03aa6e1e-8e44-45b8-9802-10f7e388c390-kube-api-access-2ngmq\") pod 
\"dnsmasq-dns-5b856c5697-tn88d\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.079233 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.350535 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"38fb740d-7a25-4acc-b004-648500772071","Type":"ContainerDied","Data":"37db8534654cf7b4d97ff919fff29fc9edc189692a6ce088ca7a9b057492721d"} Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.350894 4925 scope.go:117] "RemoveContainer" containerID="ab9ccb90370780273fe6cc2939a57209b6acbe110f767f98f8fa60c7ba7f6312" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.350551 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.388698 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.405542 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.419106 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.420433 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.423704 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.428692 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.428859 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.435088 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.543347 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tn88d"] Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.568876 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.568912 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.568975 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm8cj\" (UniqueName: 
\"kubernetes.io/projected/f7ad506b-3504-4825-9ae1-94937ca48d1a-kube-api-access-hm8cj\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.569067 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.569283 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.670669 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.670726 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.670820 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.670960 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm8cj\" (UniqueName: \"kubernetes.io/projected/f7ad506b-3504-4825-9ae1-94937ca48d1a-kube-api-access-hm8cj\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.671026 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.674904 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.675205 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.675641 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.677714 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ad506b-3504-4825-9ae1-94937ca48d1a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.690198 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm8cj\" (UniqueName: \"kubernetes.io/projected/f7ad506b-3504-4825-9ae1-94937ca48d1a-kube-api-access-hm8cj\") pod \"nova-cell1-novncproxy-0\" (UID: \"f7ad506b-3504-4825-9ae1-94937ca48d1a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:19 crc kubenswrapper[4925]: I0202 11:21:19.761833 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:20 crc kubenswrapper[4925]: I0202 11:21:20.233669 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:21:20 crc kubenswrapper[4925]: W0202 11:21:20.238337 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7ad506b_3504_4825_9ae1_94937ca48d1a.slice/crio-bd8b9733033b0f1db40b6b66b17f4c842b6a70287bbe210d03e499412284d8ec WatchSource:0}: Error finding container bd8b9733033b0f1db40b6b66b17f4c842b6a70287bbe210d03e499412284d8ec: Status 404 returned error can't find the container with id bd8b9733033b0f1db40b6b66b17f4c842b6a70287bbe210d03e499412284d8ec Feb 02 11:21:20 crc kubenswrapper[4925]: I0202 11:21:20.364783 4925 generic.go:334] "Generic (PLEG): container finished" podID="03aa6e1e-8e44-45b8-9802-10f7e388c390" 
containerID="9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896" exitCode=0 Feb 02 11:21:20 crc kubenswrapper[4925]: I0202 11:21:20.365208 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" event={"ID":"03aa6e1e-8e44-45b8-9802-10f7e388c390","Type":"ContainerDied","Data":"9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896"} Feb 02 11:21:20 crc kubenswrapper[4925]: I0202 11:21:20.365241 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" event={"ID":"03aa6e1e-8e44-45b8-9802-10f7e388c390","Type":"ContainerStarted","Data":"7ef58a5cc45861b752b5dc774d1df17a32a58da8d39c981c3e7adde1a603aea8"} Feb 02 11:21:20 crc kubenswrapper[4925]: I0202 11:21:20.380386 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f7ad506b-3504-4825-9ae1-94937ca48d1a","Type":"ContainerStarted","Data":"bd8b9733033b0f1db40b6b66b17f4c842b6a70287bbe210d03e499412284d8ec"} Feb 02 11:21:20 crc kubenswrapper[4925]: I0202 11:21:20.688282 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fb740d-7a25-4acc-b004-648500772071" path="/var/lib/kubelet/pods/38fb740d-7a25-4acc-b004-648500772071/volumes" Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.121769 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.318440 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.318781 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="ceilometer-central-agent" containerID="cri-o://a2e9f181c97c92e8b776a69ff7268de8d3d4f40c86a575765c869148026fe0c4" gracePeriod=30 Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.318921 4925 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="ceilometer-notification-agent" containerID="cri-o://9d173c120510d59f9905ff1f770ec7ca212a8976ebc841b88dea335e6002c557" gracePeriod=30 Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.318934 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="sg-core" containerID="cri-o://c39acb2f71d609591a61098d9d29038ea3cd3141afa84665baf1245effbc921e" gracePeriod=30 Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.319242 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="proxy-httpd" containerID="cri-o://490d964337a1e25b46a0ad2a0d73cacf6cf3e763b4fc63b58eb5587a1f563643" gracePeriod=30 Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.392152 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" event={"ID":"03aa6e1e-8e44-45b8-9802-10f7e388c390","Type":"ContainerStarted","Data":"afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7"} Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.393280 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.394945 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerName="nova-api-log" containerID="cri-o://4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678" gracePeriod=30 Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.395558 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"f7ad506b-3504-4825-9ae1-94937ca48d1a","Type":"ContainerStarted","Data":"f3cafadb0ff0259535a22030483a956d30be51108385c2f62963917fff2902ec"} Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.395628 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerName="nova-api-api" containerID="cri-o://9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770" gracePeriod=30 Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.422137 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" podStartSLOduration=3.422120724 podStartE2EDuration="3.422120724s" podCreationTimestamp="2026-02-02 11:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:21:21.416212276 +0000 UTC m=+1458.420461248" watchObservedRunningTime="2026-02-02 11:21:21.422120724 +0000 UTC m=+1458.426369686" Feb 02 11:21:21 crc kubenswrapper[4925]: I0202 11:21:21.444479 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.444444331 podStartE2EDuration="2.444444331s" podCreationTimestamp="2026-02-02 11:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:21:21.434653299 +0000 UTC m=+1458.438902271" watchObservedRunningTime="2026-02-02 11:21:21.444444331 +0000 UTC m=+1458.448693293" Feb 02 11:21:22 crc kubenswrapper[4925]: I0202 11:21:22.406240 4925 generic.go:334] "Generic (PLEG): container finished" podID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerID="490d964337a1e25b46a0ad2a0d73cacf6cf3e763b4fc63b58eb5587a1f563643" exitCode=0 Feb 02 11:21:22 crc kubenswrapper[4925]: I0202 11:21:22.406605 4925 generic.go:334] "Generic (PLEG): container 
finished" podID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerID="c39acb2f71d609591a61098d9d29038ea3cd3141afa84665baf1245effbc921e" exitCode=2 Feb 02 11:21:22 crc kubenswrapper[4925]: I0202 11:21:22.406616 4925 generic.go:334] "Generic (PLEG): container finished" podID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerID="a2e9f181c97c92e8b776a69ff7268de8d3d4f40c86a575765c869148026fe0c4" exitCode=0 Feb 02 11:21:22 crc kubenswrapper[4925]: I0202 11:21:22.406316 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerDied","Data":"490d964337a1e25b46a0ad2a0d73cacf6cf3e763b4fc63b58eb5587a1f563643"} Feb 02 11:21:22 crc kubenswrapper[4925]: I0202 11:21:22.406689 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerDied","Data":"c39acb2f71d609591a61098d9d29038ea3cd3141afa84665baf1245effbc921e"} Feb 02 11:21:22 crc kubenswrapper[4925]: I0202 11:21:22.406708 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerDied","Data":"a2e9f181c97c92e8b776a69ff7268de8d3d4f40c86a575765c869148026fe0c4"} Feb 02 11:21:22 crc kubenswrapper[4925]: I0202 11:21:22.410241 4925 generic.go:334] "Generic (PLEG): container finished" podID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerID="4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678" exitCode=143 Feb 02 11:21:22 crc kubenswrapper[4925]: I0202 11:21:22.410323 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"178dd29c-dc37-458d-9e16-fd649b5ee0f2","Type":"ContainerDied","Data":"4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678"} Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.458128 4925 generic.go:334] "Generic (PLEG): container finished" 
podID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerID="9d173c120510d59f9905ff1f770ec7ca212a8976ebc841b88dea335e6002c557" exitCode=0 Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.459384 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerDied","Data":"9d173c120510d59f9905ff1f770ec7ca212a8976ebc841b88dea335e6002c557"} Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.638342 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.655815 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-combined-ca-bundle\") pod \"60c8b42e-44ea-4825-bc9f-263babd207bd\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.656136 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9wgn\" (UniqueName: \"kubernetes.io/projected/60c8b42e-44ea-4825-bc9f-263babd207bd-kube-api-access-r9wgn\") pod \"60c8b42e-44ea-4825-bc9f-263babd207bd\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.656341 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-log-httpd\") pod \"60c8b42e-44ea-4825-bc9f-263babd207bd\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.656440 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-config-data\") pod \"60c8b42e-44ea-4825-bc9f-263babd207bd\" (UID: 
\"60c8b42e-44ea-4825-bc9f-263babd207bd\") " Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.656540 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-run-httpd\") pod \"60c8b42e-44ea-4825-bc9f-263babd207bd\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.656632 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-scripts\") pod \"60c8b42e-44ea-4825-bc9f-263babd207bd\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.657049 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-sg-core-conf-yaml\") pod \"60c8b42e-44ea-4825-bc9f-263babd207bd\" (UID: \"60c8b42e-44ea-4825-bc9f-263babd207bd\") " Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.656661 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "60c8b42e-44ea-4825-bc9f-263babd207bd" (UID: "60c8b42e-44ea-4825-bc9f-263babd207bd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.656982 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "60c8b42e-44ea-4825-bc9f-263babd207bd" (UID: "60c8b42e-44ea-4825-bc9f-263babd207bd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.657839 4925 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.657952 4925 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60c8b42e-44ea-4825-bc9f-263babd207bd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.663477 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-scripts" (OuterVolumeSpecName: "scripts") pod "60c8b42e-44ea-4825-bc9f-263babd207bd" (UID: "60c8b42e-44ea-4825-bc9f-263babd207bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.677532 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c8b42e-44ea-4825-bc9f-263babd207bd-kube-api-access-r9wgn" (OuterVolumeSpecName: "kube-api-access-r9wgn") pod "60c8b42e-44ea-4825-bc9f-263babd207bd" (UID: "60c8b42e-44ea-4825-bc9f-263babd207bd"). InnerVolumeSpecName "kube-api-access-r9wgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.727155 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "60c8b42e-44ea-4825-bc9f-263babd207bd" (UID: "60c8b42e-44ea-4825-bc9f-263babd207bd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.759425 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.759466 4925 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.759480 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9wgn\" (UniqueName: \"kubernetes.io/projected/60c8b42e-44ea-4825-bc9f-263babd207bd-kube-api-access-r9wgn\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.766807 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60c8b42e-44ea-4825-bc9f-263babd207bd" (UID: "60c8b42e-44ea-4825-bc9f-263babd207bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.793864 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.837149 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-config-data" (OuterVolumeSpecName: "config-data") pod "60c8b42e-44ea-4825-bc9f-263babd207bd" (UID: "60c8b42e-44ea-4825-bc9f-263babd207bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.860909 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:23 crc kubenswrapper[4925]: I0202 11:21:23.860938 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c8b42e-44ea-4825-bc9f-263babd207bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.469335 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60c8b42e-44ea-4825-bc9f-263babd207bd","Type":"ContainerDied","Data":"e40942e0b49ac1e95a634bdd1c3d80cf55d13d264f666b919677a6b625942af4"} Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.469411 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.469418 4925 scope.go:117] "RemoveContainer" containerID="490d964337a1e25b46a0ad2a0d73cacf6cf3e763b4fc63b58eb5587a1f563643" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.487521 4925 scope.go:117] "RemoveContainer" containerID="c39acb2f71d609591a61098d9d29038ea3cd3141afa84665baf1245effbc921e" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.516025 4925 scope.go:117] "RemoveContainer" containerID="9d173c120510d59f9905ff1f770ec7ca212a8976ebc841b88dea335e6002c557" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.519426 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.529837 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.544835 4925 scope.go:117] "RemoveContainer" 
containerID="a2e9f181c97c92e8b776a69ff7268de8d3d4f40c86a575765c869148026fe0c4" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.568318 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:24 crc kubenswrapper[4925]: E0202 11:21:24.568686 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="proxy-httpd" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.568705 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="proxy-httpd" Feb 02 11:21:24 crc kubenswrapper[4925]: E0202 11:21:24.568721 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="ceilometer-central-agent" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.568729 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="ceilometer-central-agent" Feb 02 11:21:24 crc kubenswrapper[4925]: E0202 11:21:24.568759 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="ceilometer-notification-agent" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.568766 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="ceilometer-notification-agent" Feb 02 11:21:24 crc kubenswrapper[4925]: E0202 11:21:24.568775 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="sg-core" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.568780 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="sg-core" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.568932 4925 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="ceilometer-notification-agent" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.568941 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="ceilometer-central-agent" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.568956 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="sg-core" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.568969 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" containerName="proxy-httpd" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.570431 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.574041 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.576738 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.593281 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.672325 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-run-httpd\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.672369 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.672426 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.672454 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-log-httpd\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.672480 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-config-data\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.672600 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-447qz\" (UniqueName: \"kubernetes.io/projected/dda94bdd-c210-4398-abf8-e1444f6c2cca-kube-api-access-447qz\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.672765 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-scripts\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: 
I0202 11:21:24.677295 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c8b42e-44ea-4825-bc9f-263babd207bd" path="/var/lib/kubelet/pods/60c8b42e-44ea-4825-bc9f-263babd207bd/volumes" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.762677 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.774199 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-run-httpd\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.774244 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.774305 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.774330 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-log-httpd\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.774349 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-config-data\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.774394 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-447qz\" (UniqueName: \"kubernetes.io/projected/dda94bdd-c210-4398-abf8-e1444f6c2cca-kube-api-access-447qz\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.774450 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-scripts\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.774755 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-run-httpd\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.775922 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-log-httpd\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.785257 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.785482 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.785742 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-scripts\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.787285 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-config-data\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.794344 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-447qz\" (UniqueName: \"kubernetes.io/projected/dda94bdd-c210-4398-abf8-e1444f6c2cca-kube-api-access-447qz\") pod \"ceilometer-0\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.968609 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:21:24 crc kubenswrapper[4925]: I0202 11:21:24.985106 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.181555 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-combined-ca-bundle\") pod \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.181655 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-config-data\") pod \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.181777 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/178dd29c-dc37-458d-9e16-fd649b5ee0f2-logs\") pod \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.182519 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zntpb\" (UniqueName: \"kubernetes.io/projected/178dd29c-dc37-458d-9e16-fd649b5ee0f2-kube-api-access-zntpb\") pod \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\" (UID: \"178dd29c-dc37-458d-9e16-fd649b5ee0f2\") " Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.182708 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/178dd29c-dc37-458d-9e16-fd649b5ee0f2-logs" (OuterVolumeSpecName: "logs") pod "178dd29c-dc37-458d-9e16-fd649b5ee0f2" (UID: "178dd29c-dc37-458d-9e16-fd649b5ee0f2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.183768 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/178dd29c-dc37-458d-9e16-fd649b5ee0f2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.186980 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/178dd29c-dc37-458d-9e16-fd649b5ee0f2-kube-api-access-zntpb" (OuterVolumeSpecName: "kube-api-access-zntpb") pod "178dd29c-dc37-458d-9e16-fd649b5ee0f2" (UID: "178dd29c-dc37-458d-9e16-fd649b5ee0f2"). InnerVolumeSpecName "kube-api-access-zntpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.212153 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-config-data" (OuterVolumeSpecName: "config-data") pod "178dd29c-dc37-458d-9e16-fd649b5ee0f2" (UID: "178dd29c-dc37-458d-9e16-fd649b5ee0f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.220003 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "178dd29c-dc37-458d-9e16-fd649b5ee0f2" (UID: "178dd29c-dc37-458d-9e16-fd649b5ee0f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.285490 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zntpb\" (UniqueName: \"kubernetes.io/projected/178dd29c-dc37-458d-9e16-fd649b5ee0f2-kube-api-access-zntpb\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.285552 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.285567 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/178dd29c-dc37-458d-9e16-fd649b5ee0f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.481279 4925 generic.go:334] "Generic (PLEG): container finished" podID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerID="9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770" exitCode=0 Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.481612 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"178dd29c-dc37-458d-9e16-fd649b5ee0f2","Type":"ContainerDied","Data":"9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770"} Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.481644 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"178dd29c-dc37-458d-9e16-fd649b5ee0f2","Type":"ContainerDied","Data":"128872a5adf3089a97764ccbadcd8e0311a831f9bfe41d4fbdba3d83e460bfaf"} Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.481666 4925 scope.go:117] "RemoveContainer" containerID="9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.481799 4925 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.492061 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:21:25 crc kubenswrapper[4925]: W0202 11:21:25.501232 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddda94bdd_c210_4398_abf8_e1444f6c2cca.slice/crio-16bf7eaa4a424e22524b7e170678e27e577591d27699bda8fcff44ec230df620 WatchSource:0}: Error finding container 16bf7eaa4a424e22524b7e170678e27e577591d27699bda8fcff44ec230df620: Status 404 returned error can't find the container with id 16bf7eaa4a424e22524b7e170678e27e577591d27699bda8fcff44ec230df620 Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.554606 4925 scope.go:117] "RemoveContainer" containerID="4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.555633 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.562916 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.577975 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:25 crc kubenswrapper[4925]: E0202 11:21:25.578401 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerName="nova-api-log" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.578416 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerName="nova-api-log" Feb 02 11:21:25 crc kubenswrapper[4925]: E0202 11:21:25.578425 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerName="nova-api-api" Feb 02 11:21:25 crc kubenswrapper[4925]: 
I0202 11:21:25.578431 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerName="nova-api-api" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.578591 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerName="nova-api-log" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.578624 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" containerName="nova-api-api" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.580299 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.584573 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.584646 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.584707 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.604618 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-config-data\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.605665 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 
11:21:25.607286 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9e926f-7778-4e5f-8781-533f72bc003f-logs\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.607339 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.607522 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.607551 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmf6\" (UniqueName: \"kubernetes.io/projected/ec9e926f-7778-4e5f-8781-533f72bc003f-kube-api-access-gjmf6\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.611259 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.625252 4925 scope.go:117] "RemoveContainer" containerID="9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770" Feb 02 11:21:25 crc kubenswrapper[4925]: E0202 11:21:25.627181 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770\": container with ID starting with 9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770 not found: ID does not exist" containerID="9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.627223 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770"} err="failed to get container status \"9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770\": rpc error: code = NotFound desc = could not find container \"9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770\": container with ID starting with 9578adb67dab64d39ffb6dec4201fdf7459504cf544acd30c54b5b3bfc5d6770 not found: ID does not exist" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.627248 4925 scope.go:117] "RemoveContainer" containerID="4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678" Feb 02 11:21:25 crc kubenswrapper[4925]: E0202 11:21:25.629359 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678\": container with ID starting with 4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678 not found: ID does not exist" containerID="4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.629388 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678"} err="failed to get container status \"4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678\": rpc error: code = NotFound desc = could not find container \"4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678\": container with ID 
starting with 4dc89b019b3dbb9b1a90440c2b3041788aeee4f8f2dbc90fdf7e52f4691e7678 not found: ID does not exist" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.710218 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9e926f-7778-4e5f-8781-533f72bc003f-logs\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.710379 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.710862 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9e926f-7778-4e5f-8781-533f72bc003f-logs\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.711691 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.711733 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjmf6\" (UniqueName: \"kubernetes.io/projected/ec9e926f-7778-4e5f-8781-533f72bc003f-kube-api-access-gjmf6\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.711884 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-config-data\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.711952 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.717717 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-config-data\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.717751 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.717792 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.725556 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.731883 
4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjmf6\" (UniqueName: \"kubernetes.io/projected/ec9e926f-7778-4e5f-8781-533f72bc003f-kube-api-access-gjmf6\") pod \"nova-api-0\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " pod="openstack/nova-api-0" Feb 02 11:21:25 crc kubenswrapper[4925]: I0202 11:21:25.910550 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:21:26 crc kubenswrapper[4925]: I0202 11:21:26.384777 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:26 crc kubenswrapper[4925]: W0202 11:21:26.425111 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec9e926f_7778_4e5f_8781_533f72bc003f.slice/crio-1c1f520bc0b6b80f11f1ab92556fb6b809def9f7d7950ba93c496af15eb163e4 WatchSource:0}: Error finding container 1c1f520bc0b6b80f11f1ab92556fb6b809def9f7d7950ba93c496af15eb163e4: Status 404 returned error can't find the container with id 1c1f520bc0b6b80f11f1ab92556fb6b809def9f7d7950ba93c496af15eb163e4 Feb 02 11:21:26 crc kubenswrapper[4925]: I0202 11:21:26.510258 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec9e926f-7778-4e5f-8781-533f72bc003f","Type":"ContainerStarted","Data":"1c1f520bc0b6b80f11f1ab92556fb6b809def9f7d7950ba93c496af15eb163e4"} Feb 02 11:21:26 crc kubenswrapper[4925]: I0202 11:21:26.512411 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerStarted","Data":"20147ccdd738512e31471a3de7494d6f4a8893115838c6066724b22e28a0829f"} Feb 02 11:21:26 crc kubenswrapper[4925]: I0202 11:21:26.512474 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerStarted","Data":"16bf7eaa4a424e22524b7e170678e27e577591d27699bda8fcff44ec230df620"} Feb 02 11:21:26 crc kubenswrapper[4925]: I0202 11:21:26.678488 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="178dd29c-dc37-458d-9e16-fd649b5ee0f2" path="/var/lib/kubelet/pods/178dd29c-dc37-458d-9e16-fd649b5ee0f2/volumes" Feb 02 11:21:27 crc kubenswrapper[4925]: I0202 11:21:27.528681 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerStarted","Data":"fa1b2697c14f323d290321e9d14713706124d25a61f5a20a2737fd0b378768a7"} Feb 02 11:21:27 crc kubenswrapper[4925]: I0202 11:21:27.530539 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec9e926f-7778-4e5f-8781-533f72bc003f","Type":"ContainerStarted","Data":"d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb"} Feb 02 11:21:27 crc kubenswrapper[4925]: I0202 11:21:27.530565 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec9e926f-7778-4e5f-8781-533f72bc003f","Type":"ContainerStarted","Data":"072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066"} Feb 02 11:21:27 crc kubenswrapper[4925]: I0202 11:21:27.558594 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.558578931 podStartE2EDuration="2.558578931s" podCreationTimestamp="2026-02-02 11:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:21:27.546407565 +0000 UTC m=+1464.550656527" watchObservedRunningTime="2026-02-02 11:21:27.558578931 +0000 UTC m=+1464.562827893" Feb 02 11:21:28 crc kubenswrapper[4925]: I0202 11:21:28.543862 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerStarted","Data":"f6aba3db89a743f8686fef69f7aa2293c17f4ef744944af3e6ba21197f535a22"} Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.081209 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.149221 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-z5nww"] Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.149431 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" podUID="674b5bd9-5722-4535-9f0e-931c61ed14d9" containerName="dnsmasq-dns" containerID="cri-o://db31309208a72b81a6de9c310adef9ed5e2b4f57b9eb2fc9394c18c0b9c69c7b" gracePeriod=10 Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.554597 4925 generic.go:334] "Generic (PLEG): container finished" podID="674b5bd9-5722-4535-9f0e-931c61ed14d9" containerID="db31309208a72b81a6de9c310adef9ed5e2b4f57b9eb2fc9394c18c0b9c69c7b" exitCode=0 Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.554933 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" event={"ID":"674b5bd9-5722-4535-9f0e-931c61ed14d9","Type":"ContainerDied","Data":"db31309208a72b81a6de9c310adef9ed5e2b4f57b9eb2fc9394c18c0b9c69c7b"} Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.702383 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.724124 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-dns-svc\") pod \"674b5bd9-5722-4535-9f0e-931c61ed14d9\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.724173 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-config\") pod \"674b5bd9-5722-4535-9f0e-931c61ed14d9\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.724372 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf24h\" (UniqueName: \"kubernetes.io/projected/674b5bd9-5722-4535-9f0e-931c61ed14d9-kube-api-access-bf24h\") pod \"674b5bd9-5722-4535-9f0e-931c61ed14d9\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.724413 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-nb\") pod \"674b5bd9-5722-4535-9f0e-931c61ed14d9\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.724465 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-sb\") pod \"674b5bd9-5722-4535-9f0e-931c61ed14d9\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.732110 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/674b5bd9-5722-4535-9f0e-931c61ed14d9-kube-api-access-bf24h" (OuterVolumeSpecName: "kube-api-access-bf24h") pod "674b5bd9-5722-4535-9f0e-931c61ed14d9" (UID: "674b5bd9-5722-4535-9f0e-931c61ed14d9"). InnerVolumeSpecName "kube-api-access-bf24h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.764281 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.797733 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "674b5bd9-5722-4535-9f0e-931c61ed14d9" (UID: "674b5bd9-5722-4535-9f0e-931c61ed14d9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.800808 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "674b5bd9-5722-4535-9f0e-931c61ed14d9" (UID: "674b5bd9-5722-4535-9f0e-931c61ed14d9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.829649 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.830787 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-config" (OuterVolumeSpecName: "config") pod "674b5bd9-5722-4535-9f0e-931c61ed14d9" (UID: "674b5bd9-5722-4535-9f0e-931c61ed14d9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:21:29 crc kubenswrapper[4925]: W0202 11:21:29.831757 4925 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/674b5bd9-5722-4535-9f0e-931c61ed14d9/volumes/kubernetes.io~configmap/config Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.831778 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-config" (OuterVolumeSpecName: "config") pod "674b5bd9-5722-4535-9f0e-931c61ed14d9" (UID: "674b5bd9-5722-4535-9f0e-931c61ed14d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.831561 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-config\") pod \"674b5bd9-5722-4535-9f0e-931c61ed14d9\" (UID: \"674b5bd9-5722-4535-9f0e-931c61ed14d9\") " Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.836853 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf24h\" (UniqueName: \"kubernetes.io/projected/674b5bd9-5722-4535-9f0e-931c61ed14d9-kube-api-access-bf24h\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.836898 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.836914 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.836924 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.854820 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "674b5bd9-5722-4535-9f0e-931c61ed14d9" (UID: "674b5bd9-5722-4535-9f0e-931c61ed14d9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:21:29 crc kubenswrapper[4925]: I0202 11:21:29.939672 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/674b5bd9-5722-4535-9f0e-931c61ed14d9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.567624 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" event={"ID":"674b5bd9-5722-4535-9f0e-931c61ed14d9","Type":"ContainerDied","Data":"f9ff24ccc6ce263b941c1e49b17bd453c8ff42dcc33028e1e171e93a7536f75c"} Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.569106 4925 scope.go:117] "RemoveContainer" containerID="db31309208a72b81a6de9c310adef9ed5e2b4f57b9eb2fc9394c18c0b9c69c7b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.569441 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-z5nww" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.579892 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerStarted","Data":"253be3ee43094eefa355ad5eafb649ce0833ef6bd6bde9c41cd8a9c4131be408"} Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.580143 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.593626 4925 scope.go:117] "RemoveContainer" containerID="d8af2619e76a4bd4bd56f0ce28b15dfaf14b64cde2602776f69e14616e15f768" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.600609 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.641983 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.021551839 podStartE2EDuration="6.641958555s" podCreationTimestamp="2026-02-02 11:21:24 +0000 UTC" firstStartedPulling="2026-02-02 11:21:25.503210927 +0000 UTC m=+1462.507459889" lastFinishedPulling="2026-02-02 11:21:30.123617633 +0000 UTC m=+1467.127866605" observedRunningTime="2026-02-02 11:21:30.609428694 +0000 UTC m=+1467.613677656" watchObservedRunningTime="2026-02-02 11:21:30.641958555 +0000 UTC m=+1467.646207527" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.645305 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-z5nww"] Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.659173 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-z5nww"] Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.687544 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674b5bd9-5722-4535-9f0e-931c61ed14d9" 
path="/var/lib/kubelet/pods/674b5bd9-5722-4535-9f0e-931c61ed14d9/volumes" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.793854 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-p8z6b"] Feb 02 11:21:30 crc kubenswrapper[4925]: E0202 11:21:30.794345 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674b5bd9-5722-4535-9f0e-931c61ed14d9" containerName="init" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.794365 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="674b5bd9-5722-4535-9f0e-931c61ed14d9" containerName="init" Feb 02 11:21:30 crc kubenswrapper[4925]: E0202 11:21:30.794376 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674b5bd9-5722-4535-9f0e-931c61ed14d9" containerName="dnsmasq-dns" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.794383 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="674b5bd9-5722-4535-9f0e-931c61ed14d9" containerName="dnsmasq-dns" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.794532 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="674b5bd9-5722-4535-9f0e-931c61ed14d9" containerName="dnsmasq-dns" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.795218 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.797783 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.798213 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.805886 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p8z6b"] Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.855052 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.855207 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-config-data\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.855249 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-scripts\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.855277 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfbj6\" (UniqueName: 
\"kubernetes.io/projected/8a09787b-0714-46c8-9617-aedde4f0d773-kube-api-access-hfbj6\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.957010 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-scripts\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.957186 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfbj6\" (UniqueName: \"kubernetes.io/projected/8a09787b-0714-46c8-9617-aedde4f0d773-kube-api-access-hfbj6\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.957464 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.957514 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-config-data\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.964961 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-scripts\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.971004 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.974777 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfbj6\" (UniqueName: \"kubernetes.io/projected/8a09787b-0714-46c8-9617-aedde4f0d773-kube-api-access-hfbj6\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:30 crc kubenswrapper[4925]: I0202 11:21:30.974859 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-config-data\") pod \"nova-cell1-cell-mapping-p8z6b\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:31 crc kubenswrapper[4925]: I0202 11:21:31.118370 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:31 crc kubenswrapper[4925]: I0202 11:21:31.557209 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p8z6b"] Feb 02 11:21:31 crc kubenswrapper[4925]: I0202 11:21:31.592635 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p8z6b" event={"ID":"8a09787b-0714-46c8-9617-aedde4f0d773","Type":"ContainerStarted","Data":"da632268dfe40b9014d0c2b582255111f44af1fd449e8196df462bb8c53c94a9"} Feb 02 11:21:32 crc kubenswrapper[4925]: I0202 11:21:32.602371 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p8z6b" event={"ID":"8a09787b-0714-46c8-9617-aedde4f0d773","Type":"ContainerStarted","Data":"442e14b8435bc6177c9d95b3bf8c0c53e6704f6604fcef94061ddf21a18ac3a1"} Feb 02 11:21:32 crc kubenswrapper[4925]: I0202 11:21:32.632869 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-p8z6b" podStartSLOduration=2.632848032 podStartE2EDuration="2.632848032s" podCreationTimestamp="2026-02-02 11:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:21:32.620607195 +0000 UTC m=+1469.624856177" watchObservedRunningTime="2026-02-02 11:21:32.632848032 +0000 UTC m=+1469.637096994" Feb 02 11:21:35 crc kubenswrapper[4925]: I0202 11:21:35.910926 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:21:35 crc kubenswrapper[4925]: I0202 11:21:35.911868 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:21:36 crc kubenswrapper[4925]: I0202 11:21:36.926403 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-log" 
probeResult="failure" output="Get \"https://10.217.0.185:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:36 crc kubenswrapper[4925]: I0202 11:21:36.926352 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:37 crc kubenswrapper[4925]: I0202 11:21:37.642294 4925 generic.go:334] "Generic (PLEG): container finished" podID="8a09787b-0714-46c8-9617-aedde4f0d773" containerID="442e14b8435bc6177c9d95b3bf8c0c53e6704f6604fcef94061ddf21a18ac3a1" exitCode=0 Feb 02 11:21:37 crc kubenswrapper[4925]: I0202 11:21:37.642349 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p8z6b" event={"ID":"8a09787b-0714-46c8-9617-aedde4f0d773","Type":"ContainerDied","Data":"442e14b8435bc6177c9d95b3bf8c0c53e6704f6604fcef94061ddf21a18ac3a1"} Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.007699 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.123202 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfbj6\" (UniqueName: \"kubernetes.io/projected/8a09787b-0714-46c8-9617-aedde4f0d773-kube-api-access-hfbj6\") pod \"8a09787b-0714-46c8-9617-aedde4f0d773\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.123282 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-combined-ca-bundle\") pod \"8a09787b-0714-46c8-9617-aedde4f0d773\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.124439 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-scripts\") pod \"8a09787b-0714-46c8-9617-aedde4f0d773\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.125022 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-config-data\") pod \"8a09787b-0714-46c8-9617-aedde4f0d773\" (UID: \"8a09787b-0714-46c8-9617-aedde4f0d773\") " Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.129618 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-scripts" (OuterVolumeSpecName: "scripts") pod "8a09787b-0714-46c8-9617-aedde4f0d773" (UID: "8a09787b-0714-46c8-9617-aedde4f0d773"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.129626 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a09787b-0714-46c8-9617-aedde4f0d773-kube-api-access-hfbj6" (OuterVolumeSpecName: "kube-api-access-hfbj6") pod "8a09787b-0714-46c8-9617-aedde4f0d773" (UID: "8a09787b-0714-46c8-9617-aedde4f0d773"). InnerVolumeSpecName "kube-api-access-hfbj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.153291 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-config-data" (OuterVolumeSpecName: "config-data") pod "8a09787b-0714-46c8-9617-aedde4f0d773" (UID: "8a09787b-0714-46c8-9617-aedde4f0d773"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.163365 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a09787b-0714-46c8-9617-aedde4f0d773" (UID: "8a09787b-0714-46c8-9617-aedde4f0d773"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.227398 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.227463 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfbj6\" (UniqueName: \"kubernetes.io/projected/8a09787b-0714-46c8-9617-aedde4f0d773-kube-api-access-hfbj6\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.227475 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.227487 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a09787b-0714-46c8-9617-aedde4f0d773-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.657765 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p8z6b" event={"ID":"8a09787b-0714-46c8-9617-aedde4f0d773","Type":"ContainerDied","Data":"da632268dfe40b9014d0c2b582255111f44af1fd449e8196df462bb8c53c94a9"} Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.657814 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da632268dfe40b9014d0c2b582255111f44af1fd449e8196df462bb8c53c94a9" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.657867 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p8z6b" Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.835630 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.835964 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7ab7de54-20fd-4483-b00e-ad3ef863bf47" containerName="nova-scheduler-scheduler" containerID="cri-o://16796e4d3f44972434ecf542ccd9156a72b4e810c7a4564fae26e1a9099c69d9" gracePeriod=30 Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.850521 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.851048 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-api" containerID="cri-o://d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb" gracePeriod=30 Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.851330 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-log" containerID="cri-o://072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066" gracePeriod=30 Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.887561 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.887965 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-metadata" containerID="cri-o://a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae" gracePeriod=30 Feb 02 11:21:39 crc kubenswrapper[4925]: I0202 11:21:39.887841 4925 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-log" containerID="cri-o://3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d" gracePeriod=30 Feb 02 11:21:40 crc kubenswrapper[4925]: E0202 11:21:40.526300 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16796e4d3f44972434ecf542ccd9156a72b4e810c7a4564fae26e1a9099c69d9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 11:21:40 crc kubenswrapper[4925]: E0202 11:21:40.527885 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16796e4d3f44972434ecf542ccd9156a72b4e810c7a4564fae26e1a9099c69d9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 11:21:40 crc kubenswrapper[4925]: E0202 11:21:40.529428 4925 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16796e4d3f44972434ecf542ccd9156a72b4e810c7a4564fae26e1a9099c69d9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 11:21:40 crc kubenswrapper[4925]: E0202 11:21:40.529470 4925 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7ab7de54-20fd-4483-b00e-ad3ef863bf47" containerName="nova-scheduler-scheduler" Feb 02 11:21:40 crc kubenswrapper[4925]: I0202 11:21:40.685430 4925 generic.go:334] "Generic (PLEG): container finished" 
podID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerID="3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d" exitCode=143 Feb 02 11:21:40 crc kubenswrapper[4925]: I0202 11:21:40.688195 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc22bb3f-71e3-416d-a8cf-62656c441f54","Type":"ContainerDied","Data":"3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d"} Feb 02 11:21:40 crc kubenswrapper[4925]: I0202 11:21:40.688714 4925 generic.go:334] "Generic (PLEG): container finished" podID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerID="072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066" exitCode=143 Feb 02 11:21:40 crc kubenswrapper[4925]: I0202 11:21:40.688754 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec9e926f-7778-4e5f-8781-533f72bc003f","Type":"ContainerDied","Data":"072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066"} Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.481097 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.569720 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.659691 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-combined-ca-bundle\") pod \"ec9e926f-7778-4e5f-8781-533f72bc003f\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.659847 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjmf6\" (UniqueName: \"kubernetes.io/projected/ec9e926f-7778-4e5f-8781-533f72bc003f-kube-api-access-gjmf6\") pod \"ec9e926f-7778-4e5f-8781-533f72bc003f\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.659870 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-public-tls-certs\") pod \"ec9e926f-7778-4e5f-8781-533f72bc003f\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.659927 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-config-data\") pod \"ec9e926f-7778-4e5f-8781-533f72bc003f\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.660015 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9e926f-7778-4e5f-8781-533f72bc003f-logs\") pod \"ec9e926f-7778-4e5f-8781-533f72bc003f\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.660099 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-internal-tls-certs\") pod \"ec9e926f-7778-4e5f-8781-533f72bc003f\" (UID: \"ec9e926f-7778-4e5f-8781-533f72bc003f\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.662488 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec9e926f-7778-4e5f-8781-533f72bc003f-logs" (OuterVolumeSpecName: "logs") pod "ec9e926f-7778-4e5f-8781-533f72bc003f" (UID: "ec9e926f-7778-4e5f-8781-533f72bc003f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.665732 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9e926f-7778-4e5f-8781-533f72bc003f-kube-api-access-gjmf6" (OuterVolumeSpecName: "kube-api-access-gjmf6") pod "ec9e926f-7778-4e5f-8781-533f72bc003f" (UID: "ec9e926f-7778-4e5f-8781-533f72bc003f"). InnerVolumeSpecName "kube-api-access-gjmf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.690619 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-config-data" (OuterVolumeSpecName: "config-data") pod "ec9e926f-7778-4e5f-8781-533f72bc003f" (UID: "ec9e926f-7778-4e5f-8781-533f72bc003f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.694352 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec9e926f-7778-4e5f-8781-533f72bc003f" (UID: "ec9e926f-7778-4e5f-8781-533f72bc003f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.713002 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ec9e926f-7778-4e5f-8781-533f72bc003f" (UID: "ec9e926f-7778-4e5f-8781-533f72bc003f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.714333 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ec9e926f-7778-4e5f-8781-533f72bc003f" (UID: "ec9e926f-7778-4e5f-8781-533f72bc003f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.716243 4925 generic.go:334] "Generic (PLEG): container finished" podID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerID="a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae" exitCode=0 Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.716316 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc22bb3f-71e3-416d-a8cf-62656c441f54","Type":"ContainerDied","Data":"a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae"} Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.716350 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc22bb3f-71e3-416d-a8cf-62656c441f54","Type":"ContainerDied","Data":"c86fc94f55eb214a136de289e60494051971b7a602c400e0d1214fb8d02197c0"} Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.716370 4925 scope.go:117] "RemoveContainer" containerID="a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae" Feb 02 11:21:43 crc 
kubenswrapper[4925]: I0202 11:21:43.716573 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.722303 4925 generic.go:334] "Generic (PLEG): container finished" podID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerID="d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb" exitCode=0 Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.722341 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec9e926f-7778-4e5f-8781-533f72bc003f","Type":"ContainerDied","Data":"d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb"} Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.722386 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec9e926f-7778-4e5f-8781-533f72bc003f","Type":"ContainerDied","Data":"1c1f520bc0b6b80f11f1ab92556fb6b809def9f7d7950ba93c496af15eb163e4"} Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.722360 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.743662 4925 scope.go:117] "RemoveContainer" containerID="3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.757671 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.762437 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-nova-metadata-tls-certs\") pod \"bc22bb3f-71e3-416d-a8cf-62656c441f54\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.762530 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc22bb3f-71e3-416d-a8cf-62656c441f54-logs\") pod \"bc22bb3f-71e3-416d-a8cf-62656c441f54\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.762630 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snw4k\" (UniqueName: \"kubernetes.io/projected/bc22bb3f-71e3-416d-a8cf-62656c441f54-kube-api-access-snw4k\") pod \"bc22bb3f-71e3-416d-a8cf-62656c441f54\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.762685 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-combined-ca-bundle\") pod \"bc22bb3f-71e3-416d-a8cf-62656c441f54\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.762885 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-config-data\") pod \"bc22bb3f-71e3-416d-a8cf-62656c441f54\" (UID: \"bc22bb3f-71e3-416d-a8cf-62656c441f54\") " Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.763055 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc22bb3f-71e3-416d-a8cf-62656c441f54-logs" (OuterVolumeSpecName: "logs") pod "bc22bb3f-71e3-416d-a8cf-62656c441f54" (UID: "bc22bb3f-71e3-416d-a8cf-62656c441f54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.767738 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.772858 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9e926f-7778-4e5f-8781-533f72bc003f-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.772903 4925 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.772928 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.772949 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjmf6\" (UniqueName: \"kubernetes.io/projected/ec9e926f-7778-4e5f-8781-533f72bc003f-kube-api-access-gjmf6\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.772970 4925 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.772994 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc22bb3f-71e3-416d-a8cf-62656c441f54-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.773062 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9e926f-7778-4e5f-8781-533f72bc003f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.780030 4925 scope.go:117] "RemoveContainer" containerID="a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae" Feb 02 11:21:43 crc kubenswrapper[4925]: E0202 11:21:43.780383 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae\": container with ID starting with a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae not found: ID does not exist" containerID="a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.780415 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae"} err="failed to get container status \"a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae\": rpc error: code = NotFound desc = could not find container \"a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae\": container with ID starting with a09d6c5ea0a269074198503eca3d93b2637ddc950501cff14c136357328e95ae not found: ID does not exist" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.780435 4925 scope.go:117] "RemoveContainer" 
containerID="3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d" Feb 02 11:21:43 crc kubenswrapper[4925]: E0202 11:21:43.780615 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d\": container with ID starting with 3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d not found: ID does not exist" containerID="3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.780637 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d"} err="failed to get container status \"3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d\": rpc error: code = NotFound desc = could not find container \"3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d\": container with ID starting with 3904c3550ffaacd388fef703c8ff9823ed8d5c980b6c37324c34cd7af5014b3d not found: ID does not exist" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.780649 4925 scope.go:117] "RemoveContainer" containerID="d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785225 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:43 crc kubenswrapper[4925]: E0202 11:21:43.785612 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-log" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785632 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-log" Feb 02 11:21:43 crc kubenswrapper[4925]: E0202 11:21:43.785656 4925 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8a09787b-0714-46c8-9617-aedde4f0d773" containerName="nova-manage" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785663 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a09787b-0714-46c8-9617-aedde4f0d773" containerName="nova-manage" Feb 02 11:21:43 crc kubenswrapper[4925]: E0202 11:21:43.785673 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-metadata" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785679 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-metadata" Feb 02 11:21:43 crc kubenswrapper[4925]: E0202 11:21:43.785690 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-log" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785695 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-log" Feb 02 11:21:43 crc kubenswrapper[4925]: E0202 11:21:43.785704 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-api" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785709 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-api" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785859 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-log" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785868 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a09787b-0714-46c8-9617-aedde4f0d773" containerName="nova-manage" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785876 4925 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-metadata" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785885 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-api" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.785902 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" containerName="nova-api-log" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.786986 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.789138 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.789400 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.792646 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.796378 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc22bb3f-71e3-416d-a8cf-62656c441f54-kube-api-access-snw4k" (OuterVolumeSpecName: "kube-api-access-snw4k") pod "bc22bb3f-71e3-416d-a8cf-62656c441f54" (UID: "bc22bb3f-71e3-416d-a8cf-62656c441f54"). InnerVolumeSpecName "kube-api-access-snw4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.800101 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.817992 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc22bb3f-71e3-416d-a8cf-62656c441f54" (UID: "bc22bb3f-71e3-416d-a8cf-62656c441f54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.823582 4925 scope.go:117] "RemoveContainer" containerID="072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.833789 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-config-data" (OuterVolumeSpecName: "config-data") pod "bc22bb3f-71e3-416d-a8cf-62656c441f54" (UID: "bc22bb3f-71e3-416d-a8cf-62656c441f54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.835410 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bc22bb3f-71e3-416d-a8cf-62656c441f54" (UID: "bc22bb3f-71e3-416d-a8cf-62656c441f54"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.841427 4925 scope.go:117] "RemoveContainer" containerID="d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb" Feb 02 11:21:43 crc kubenswrapper[4925]: E0202 11:21:43.841895 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb\": container with ID starting with d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb not found: ID does not exist" containerID="d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.841925 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb"} err="failed to get container status \"d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb\": rpc error: code = NotFound desc = could not find container \"d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb\": container with ID starting with d2543ebd9f9c83777fa0f7f89f03c03426624ca43ee0c731a3175d47e493e8cb not found: ID does not exist" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.841949 4925 scope.go:117] "RemoveContainer" containerID="072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066" Feb 02 11:21:43 crc kubenswrapper[4925]: E0202 11:21:43.842231 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066\": container with ID starting with 072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066 not found: ID does not exist" containerID="072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.842254 
4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066"} err="failed to get container status \"072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066\": rpc error: code = NotFound desc = could not find container \"072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066\": container with ID starting with 072ea467626439cdb66c6b6fd9ba8b4e40fae98fcea64d60ab74ddef9c6df066 not found: ID does not exist" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.875705 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.875732 4925 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.875745 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snw4k\" (UniqueName: \"kubernetes.io/projected/bc22bb3f-71e3-416d-a8cf-62656c441f54-kube-api-access-snw4k\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.875754 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc22bb3f-71e3-416d-a8cf-62656c441f54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.976867 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" 
Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.976931 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-logs\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.976973 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w25c\" (UniqueName: \"kubernetes.io/projected/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-kube-api-access-8w25c\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.977003 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.977036 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:43 crc kubenswrapper[4925]: I0202 11:21:43.977188 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-config-data\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.047666 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.057026 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.067808 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.069675 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.071782 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.073549 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.078349 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.078593 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-config-data\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.078722 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287d1a5-371d-44d3-a215-6937bf4da1a1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 
11:21:44.078815 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3287d1a5-371d-44d3-a215-6937bf4da1a1-config-data\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.079050 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qczvm\" (UniqueName: \"kubernetes.io/projected/3287d1a5-371d-44d3-a215-6937bf4da1a1-kube-api-access-qczvm\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.079187 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.079284 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-logs\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.079349 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3287d1a5-371d-44d3-a215-6937bf4da1a1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.079379 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w25c\" (UniqueName: 
\"kubernetes.io/projected/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-kube-api-access-8w25c\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.079433 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.079487 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3287d1a5-371d-44d3-a215-6937bf4da1a1-logs\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.082706 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-logs\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.083750 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-config-data\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.085267 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.085283 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.086235 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.105118 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.128746 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w25c\" (UniqueName: \"kubernetes.io/projected/2526e2c5-e58e-4e8a-b55d-ec5d06a490d1-kube-api-access-8w25c\") pod \"nova-api-0\" (UID: \"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1\") " pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.180393 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287d1a5-371d-44d3-a215-6937bf4da1a1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.180990 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3287d1a5-371d-44d3-a215-6937bf4da1a1-config-data\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.181169 4925 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qczvm\" (UniqueName: \"kubernetes.io/projected/3287d1a5-371d-44d3-a215-6937bf4da1a1-kube-api-access-qczvm\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.181322 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3287d1a5-371d-44d3-a215-6937bf4da1a1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.181425 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3287d1a5-371d-44d3-a215-6937bf4da1a1-logs\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.182007 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3287d1a5-371d-44d3-a215-6937bf4da1a1-logs\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.186059 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287d1a5-371d-44d3-a215-6937bf4da1a1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.186430 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3287d1a5-371d-44d3-a215-6937bf4da1a1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " 
pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.187199 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3287d1a5-371d-44d3-a215-6937bf4da1a1-config-data\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.197610 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qczvm\" (UniqueName: \"kubernetes.io/projected/3287d1a5-371d-44d3-a215-6937bf4da1a1-kube-api-access-qczvm\") pod \"nova-metadata-0\" (UID: \"3287d1a5-371d-44d3-a215-6937bf4da1a1\") " pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.395243 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.413557 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.675436 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" path="/var/lib/kubelet/pods/bc22bb3f-71e3-416d-a8cf-62656c441f54/volumes" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.676179 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec9e926f-7778-4e5f-8781-533f72bc003f" path="/var/lib/kubelet/pods/ec9e926f-7778-4e5f-8781-533f72bc003f/volumes" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.735406 4925 generic.go:334] "Generic (PLEG): container finished" podID="7ab7de54-20fd-4483-b00e-ad3ef863bf47" containerID="16796e4d3f44972434ecf542ccd9156a72b4e810c7a4564fae26e1a9099c69d9" exitCode=0 Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.735496 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7ab7de54-20fd-4483-b00e-ad3ef863bf47","Type":"ContainerDied","Data":"16796e4d3f44972434ecf542ccd9156a72b4e810c7a4564fae26e1a9099c69d9"} Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.880716 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:21:44 crc kubenswrapper[4925]: W0202 11:21:44.883150 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2526e2c5_e58e_4e8a_b55d_ec5d06a490d1.slice/crio-605ab9d074d51470bae3e109c1d6d64c8c6e77467b78a7ea62ef3790e0683095 WatchSource:0}: Error finding container 605ab9d074d51470bae3e109c1d6d64c8c6e77467b78a7ea62ef3790e0683095: Status 404 returned error can't find the container with id 605ab9d074d51470bae3e109c1d6d64c8c6e77467b78a7ea62ef3790e0683095 Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.885134 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.891043 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.893983 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-config-data\") pod \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.894025 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xlgm\" (UniqueName: \"kubernetes.io/projected/7ab7de54-20fd-4483-b00e-ad3ef863bf47-kube-api-access-9xlgm\") pod \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.894042 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-combined-ca-bundle\") pod \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\" (UID: \"7ab7de54-20fd-4483-b00e-ad3ef863bf47\") " Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.903981 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab7de54-20fd-4483-b00e-ad3ef863bf47-kube-api-access-9xlgm" (OuterVolumeSpecName: "kube-api-access-9xlgm") pod "7ab7de54-20fd-4483-b00e-ad3ef863bf47" (UID: "7ab7de54-20fd-4483-b00e-ad3ef863bf47"). InnerVolumeSpecName "kube-api-access-9xlgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.938354 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ab7de54-20fd-4483-b00e-ad3ef863bf47" (UID: "7ab7de54-20fd-4483-b00e-ad3ef863bf47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.951779 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-config-data" (OuterVolumeSpecName: "config-data") pod "7ab7de54-20fd-4483-b00e-ad3ef863bf47" (UID: "7ab7de54-20fd-4483-b00e-ad3ef863bf47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.997031 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.997371 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xlgm\" (UniqueName: \"kubernetes.io/projected/7ab7de54-20fd-4483-b00e-ad3ef863bf47-kube-api-access-9xlgm\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:44 crc kubenswrapper[4925]: I0202 11:21:44.997395 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7de54-20fd-4483-b00e-ad3ef863bf47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.746318 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3287d1a5-371d-44d3-a215-6937bf4da1a1","Type":"ContainerStarted","Data":"efb67b0617be299780413de37e01c25e6ba44e9921bd88e79de14edf926a6fb5"} Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.746646 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3287d1a5-371d-44d3-a215-6937bf4da1a1","Type":"ContainerStarted","Data":"f43bc498b06538dbbb1e81bdae813c54b0d9cd3969aecbe7027c9f0814158745"} Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.746661 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3287d1a5-371d-44d3-a215-6937bf4da1a1","Type":"ContainerStarted","Data":"fc75396540edfaa3be65be16b1b00004a5afb4d6be8d6c387e3e99e01f163974"} Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.748389 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7ab7de54-20fd-4483-b00e-ad3ef863bf47","Type":"ContainerDied","Data":"174b66c0f5b928ede0a11bb12a436491339863d632c13cedcd7a1087db496d35"} Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.748425 4925 scope.go:117] "RemoveContainer" containerID="16796e4d3f44972434ecf542ccd9156a72b4e810c7a4564fae26e1a9099c69d9" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.748517 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.752334 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1","Type":"ContainerStarted","Data":"833878af1f5859e27e6f5b828104ab0a11c8b54534fcf29804ef95f81c3d74c9"} Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.752398 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1","Type":"ContainerStarted","Data":"9ab7bb6c876f65a96a0e12e17048db033c18caffe8e84d54fd4945dd0b28fc5e"} Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.752411 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2526e2c5-e58e-4e8a-b55d-ec5d06a490d1","Type":"ContainerStarted","Data":"605ab9d074d51470bae3e109c1d6d64c8c6e77467b78a7ea62ef3790e0683095"} Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.772971 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.772949665 podStartE2EDuration="1.772949665s" podCreationTimestamp="2026-02-02 11:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:21:45.769330128 +0000 UTC m=+1482.773579090" watchObservedRunningTime="2026-02-02 11:21:45.772949665 +0000 UTC m=+1482.777198627" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.799549 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.811750 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.831842 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:21:45 crc kubenswrapper[4925]: 
E0202 11:21:45.832525 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab7de54-20fd-4483-b00e-ad3ef863bf47" containerName="nova-scheduler-scheduler" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.832550 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab7de54-20fd-4483-b00e-ad3ef863bf47" containerName="nova-scheduler-scheduler" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.832770 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab7de54-20fd-4483-b00e-ad3ef863bf47" containerName="nova-scheduler-scheduler" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.833508 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.835773 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.837959 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.837935864 podStartE2EDuration="2.837935864s" podCreationTimestamp="2026-02-02 11:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:21:45.810096169 +0000 UTC m=+1482.814345161" watchObservedRunningTime="2026-02-02 11:21:45.837935864 +0000 UTC m=+1482.842184826" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.852282 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.912720 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54598a97-2180-4fe5-a267-970c64919ba0-config-data\") pod \"nova-scheduler-0\" (UID: \"54598a97-2180-4fe5-a267-970c64919ba0\") " 
pod="openstack/nova-scheduler-0" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.912792 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54598a97-2180-4fe5-a267-970c64919ba0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54598a97-2180-4fe5-a267-970c64919ba0\") " pod="openstack/nova-scheduler-0" Feb 02 11:21:45 crc kubenswrapper[4925]: I0202 11:21:45.912850 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm8x6\" (UniqueName: \"kubernetes.io/projected/54598a97-2180-4fe5-a267-970c64919ba0-kube-api-access-gm8x6\") pod \"nova-scheduler-0\" (UID: \"54598a97-2180-4fe5-a267-970c64919ba0\") " pod="openstack/nova-scheduler-0" Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.014756 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54598a97-2180-4fe5-a267-970c64919ba0-config-data\") pod \"nova-scheduler-0\" (UID: \"54598a97-2180-4fe5-a267-970c64919ba0\") " pod="openstack/nova-scheduler-0" Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.014840 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54598a97-2180-4fe5-a267-970c64919ba0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54598a97-2180-4fe5-a267-970c64919ba0\") " pod="openstack/nova-scheduler-0" Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.014898 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm8x6\" (UniqueName: \"kubernetes.io/projected/54598a97-2180-4fe5-a267-970c64919ba0-kube-api-access-gm8x6\") pod \"nova-scheduler-0\" (UID: \"54598a97-2180-4fe5-a267-970c64919ba0\") " pod="openstack/nova-scheduler-0" Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.020787 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54598a97-2180-4fe5-a267-970c64919ba0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54598a97-2180-4fe5-a267-970c64919ba0\") " pod="openstack/nova-scheduler-0" Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.024784 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54598a97-2180-4fe5-a267-970c64919ba0-config-data\") pod \"nova-scheduler-0\" (UID: \"54598a97-2180-4fe5-a267-970c64919ba0\") " pod="openstack/nova-scheduler-0" Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.033525 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm8x6\" (UniqueName: \"kubernetes.io/projected/54598a97-2180-4fe5-a267-970c64919ba0-kube-api-access-gm8x6\") pod \"nova-scheduler-0\" (UID: \"54598a97-2180-4fe5-a267-970c64919ba0\") " pod="openstack/nova-scheduler-0" Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.152131 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.590823 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.677686 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab7de54-20fd-4483-b00e-ad3ef863bf47" path="/var/lib/kubelet/pods/7ab7de54-20fd-4483-b00e-ad3ef863bf47/volumes" Feb 02 11:21:46 crc kubenswrapper[4925]: I0202 11:21:46.784501 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54598a97-2180-4fe5-a267-970c64919ba0","Type":"ContainerStarted","Data":"a880ae729462f61bc20ab2df4842e429be80ad64e9b71e55abe876e50f832b7c"} Feb 02 11:21:47 crc kubenswrapper[4925]: I0202 11:21:47.794040 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54598a97-2180-4fe5-a267-970c64919ba0","Type":"ContainerStarted","Data":"ac6868ae63db576fa2334070220262762c7ba158fa65b5145d7c97450a37d8a0"} Feb 02 11:21:47 crc kubenswrapper[4925]: I0202 11:21:47.811204 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.811186419 podStartE2EDuration="2.811186419s" podCreationTimestamp="2026-02-02 11:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:21:47.808939869 +0000 UTC m=+1484.813188831" watchObservedRunningTime="2026-02-02 11:21:47.811186419 +0000 UTC m=+1484.815435381" Feb 02 11:21:48 crc kubenswrapper[4925]: I0202 11:21:48.501597 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.177:8775/\": dial tcp 10.217.0.177:8775: i/o timeout" Feb 02 11:21:48 crc 
kubenswrapper[4925]: I0202 11:21:48.502616 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bc22bb3f-71e3-416d-a8cf-62656c441f54" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.177:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:49 crc kubenswrapper[4925]: I0202 11:21:49.395555 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 11:21:49 crc kubenswrapper[4925]: I0202 11:21:49.395913 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 11:21:51 crc kubenswrapper[4925]: I0202 11:21:51.153043 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 11:21:54 crc kubenswrapper[4925]: I0202 11:21:54.396340 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:21:54 crc kubenswrapper[4925]: I0202 11:21:54.396614 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:21:54 crc kubenswrapper[4925]: I0202 11:21:54.414820 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:21:54 crc kubenswrapper[4925]: I0202 11:21:54.414864 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:21:54 crc kubenswrapper[4925]: I0202 11:21:54.972347 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 11:21:55 crc kubenswrapper[4925]: I0202 11:21:55.409326 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3287d1a5-371d-44d3-a215-6937bf4da1a1" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:55 crc kubenswrapper[4925]: I0202 11:21:55.409350 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3287d1a5-371d-44d3-a215-6937bf4da1a1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:55 crc kubenswrapper[4925]: I0202 11:21:55.427313 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2526e2c5-e58e-4e8a-b55d-ec5d06a490d1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:55 crc kubenswrapper[4925]: I0202 11:21:55.427321 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2526e2c5-e58e-4e8a-b55d-ec5d06a490d1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:21:56 crc kubenswrapper[4925]: I0202 11:21:56.152545 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 11:21:56 crc kubenswrapper[4925]: I0202 11:21:56.183784 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 11:21:56 crc kubenswrapper[4925]: I0202 11:21:56.889756 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 11:21:57 crc kubenswrapper[4925]: I0202 11:21:57.862457 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 11:21:57 crc kubenswrapper[4925]: I0202 11:21:57.862968 4925 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/kube-state-metrics-0" podUID="c60f25de-220a-4eb1-b1da-30faf1a27cf8" containerName="kube-state-metrics" containerID="cri-o://567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25" gracePeriod=30 Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.383847 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.469582 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxr2c\" (UniqueName: \"kubernetes.io/projected/c60f25de-220a-4eb1-b1da-30faf1a27cf8-kube-api-access-qxr2c\") pod \"c60f25de-220a-4eb1-b1da-30faf1a27cf8\" (UID: \"c60f25de-220a-4eb1-b1da-30faf1a27cf8\") " Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.482453 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60f25de-220a-4eb1-b1da-30faf1a27cf8-kube-api-access-qxr2c" (OuterVolumeSpecName: "kube-api-access-qxr2c") pod "c60f25de-220a-4eb1-b1da-30faf1a27cf8" (UID: "c60f25de-220a-4eb1-b1da-30faf1a27cf8"). InnerVolumeSpecName "kube-api-access-qxr2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.573119 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxr2c\" (UniqueName: \"kubernetes.io/projected/c60f25de-220a-4eb1-b1da-30faf1a27cf8-kube-api-access-qxr2c\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.882577 4925 generic.go:334] "Generic (PLEG): container finished" podID="c60f25de-220a-4eb1-b1da-30faf1a27cf8" containerID="567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25" exitCode=2 Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.882617 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c60f25de-220a-4eb1-b1da-30faf1a27cf8","Type":"ContainerDied","Data":"567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25"} Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.882642 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c60f25de-220a-4eb1-b1da-30faf1a27cf8","Type":"ContainerDied","Data":"2042b6970a9af4f5fe8622e1e0bb61b7c17e1572187e7f1d61b9814bbcd8058e"} Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.882658 4925 scope.go:117] "RemoveContainer" containerID="567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.882777 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.913800 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.924234 4925 scope.go:117] "RemoveContainer" containerID="567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25" Feb 02 11:21:58 crc kubenswrapper[4925]: E0202 11:21:58.924694 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25\": container with ID starting with 567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25 not found: ID does not exist" containerID="567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.924746 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25"} err="failed to get container status \"567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25\": rpc error: code = NotFound desc = could not find container \"567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25\": container with ID starting with 567a184ef5a1fb0ab12f72ce12aae63f1e5b06dc4def9e889f7db3a284410b25 not found: ID does not exist" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.926368 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.937353 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 11:21:58 crc kubenswrapper[4925]: E0202 11:21:58.938007 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60f25de-220a-4eb1-b1da-30faf1a27cf8" containerName="kube-state-metrics" Feb 02 11:21:58 crc 
kubenswrapper[4925]: I0202 11:21:58.938030 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60f25de-220a-4eb1-b1da-30faf1a27cf8" containerName="kube-state-metrics" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.938391 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60f25de-220a-4eb1-b1da-30faf1a27cf8" containerName="kube-state-metrics" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.940233 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.946496 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.947098 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.954058 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.980373 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.980603 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v96lf\" (UniqueName: \"kubernetes.io/projected/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-kube-api-access-v96lf\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.980963 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:58 crc kubenswrapper[4925]: I0202 11:21:58.981149 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.082909 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v96lf\" (UniqueName: \"kubernetes.io/projected/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-kube-api-access-v96lf\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.083042 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.083113 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.083162 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.087392 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.087618 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.091591 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.100179 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v96lf\" (UniqueName: \"kubernetes.io/projected/7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f-kube-api-access-v96lf\") pod \"kube-state-metrics-0\" (UID: \"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f\") " pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.125177 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 
11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.125520 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="sg-core" containerID="cri-o://f6aba3db89a743f8686fef69f7aa2293c17f4ef744944af3e6ba21197f535a22" gracePeriod=30 Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.125699 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="proxy-httpd" containerID="cri-o://253be3ee43094eefa355ad5eafb649ce0833ef6bd6bde9c41cd8a9c4131be408" gracePeriod=30 Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.125790 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="ceilometer-notification-agent" containerID="cri-o://fa1b2697c14f323d290321e9d14713706124d25a61f5a20a2737fd0b378768a7" gracePeriod=30 Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.125450 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="ceilometer-central-agent" containerID="cri-o://20147ccdd738512e31471a3de7494d6f4a8893115838c6066724b22e28a0829f" gracePeriod=30 Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.267128 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 11:21:59 crc kubenswrapper[4925]: W0202 11:21:59.732523 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bc4ffdd_b5f2_40e7_9c73_0c5efb6ee28f.slice/crio-ca3aebad96bb6026149b4ceecc0eb9a7926c4b155fece298da90d8c35303c01a WatchSource:0}: Error finding container ca3aebad96bb6026149b4ceecc0eb9a7926c4b155fece298da90d8c35303c01a: Status 404 returned error can't find the container with id ca3aebad96bb6026149b4ceecc0eb9a7926c4b155fece298da90d8c35303c01a Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.732829 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.891598 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f","Type":"ContainerStarted","Data":"ca3aebad96bb6026149b4ceecc0eb9a7926c4b155fece298da90d8c35303c01a"} Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.897263 4925 generic.go:334] "Generic (PLEG): container finished" podID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerID="253be3ee43094eefa355ad5eafb649ce0833ef6bd6bde9c41cd8a9c4131be408" exitCode=0 Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.897300 4925 generic.go:334] "Generic (PLEG): container finished" podID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerID="f6aba3db89a743f8686fef69f7aa2293c17f4ef744944af3e6ba21197f535a22" exitCode=2 Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.897311 4925 generic.go:334] "Generic (PLEG): container finished" podID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerID="20147ccdd738512e31471a3de7494d6f4a8893115838c6066724b22e28a0829f" exitCode=0 Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.897332 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerDied","Data":"253be3ee43094eefa355ad5eafb649ce0833ef6bd6bde9c41cd8a9c4131be408"} Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.897358 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerDied","Data":"f6aba3db89a743f8686fef69f7aa2293c17f4ef744944af3e6ba21197f535a22"} Feb 02 11:21:59 crc kubenswrapper[4925]: I0202 11:21:59.897370 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerDied","Data":"20147ccdd738512e31471a3de7494d6f4a8893115838c6066724b22e28a0829f"} Feb 02 11:22:00 crc kubenswrapper[4925]: I0202 11:22:00.673579 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c60f25de-220a-4eb1-b1da-30faf1a27cf8" path="/var/lib/kubelet/pods/c60f25de-220a-4eb1-b1da-30faf1a27cf8/volumes" Feb 02 11:22:00 crc kubenswrapper[4925]: I0202 11:22:00.906567 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f","Type":"ContainerStarted","Data":"1d0da35a248315e7902a685d051cb4ae2647c329f1f1ceb4bb476fea5a4edbd2"} Feb 02 11:22:00 crc kubenswrapper[4925]: I0202 11:22:00.906735 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 11:22:00 crc kubenswrapper[4925]: I0202 11:22:00.934706 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.479353459 podStartE2EDuration="2.934683021s" podCreationTimestamp="2026-02-02 11:21:58 +0000 UTC" firstStartedPulling="2026-02-02 11:21:59.73494838 +0000 UTC m=+1496.739197352" lastFinishedPulling="2026-02-02 11:22:00.190277952 +0000 UTC m=+1497.194526914" observedRunningTime="2026-02-02 11:22:00.923475389 +0000 UTC m=+1497.927724351" 
watchObservedRunningTime="2026-02-02 11:22:00.934683021 +0000 UTC m=+1497.938931983" Feb 02 11:22:03 crc kubenswrapper[4925]: I0202 11:22:03.937548 4925 generic.go:334] "Generic (PLEG): container finished" podID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerID="fa1b2697c14f323d290321e9d14713706124d25a61f5a20a2737fd0b378768a7" exitCode=0 Feb 02 11:22:03 crc kubenswrapper[4925]: I0202 11:22:03.937606 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerDied","Data":"fa1b2697c14f323d290321e9d14713706124d25a61f5a20a2737fd0b378768a7"} Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.152346 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.174448 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-combined-ca-bundle\") pod \"dda94bdd-c210-4398-abf8-e1444f6c2cca\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.174530 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-447qz\" (UniqueName: \"kubernetes.io/projected/dda94bdd-c210-4398-abf8-e1444f6c2cca-kube-api-access-447qz\") pod \"dda94bdd-c210-4398-abf8-e1444f6c2cca\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.174571 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-config-data\") pod \"dda94bdd-c210-4398-abf8-e1444f6c2cca\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.174604 4925 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-log-httpd\") pod \"dda94bdd-c210-4398-abf8-e1444f6c2cca\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.174685 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-scripts\") pod \"dda94bdd-c210-4398-abf8-e1444f6c2cca\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.174701 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-sg-core-conf-yaml\") pod \"dda94bdd-c210-4398-abf8-e1444f6c2cca\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.174737 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-run-httpd\") pod \"dda94bdd-c210-4398-abf8-e1444f6c2cca\" (UID: \"dda94bdd-c210-4398-abf8-e1444f6c2cca\") " Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.175369 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dda94bdd-c210-4398-abf8-e1444f6c2cca" (UID: "dda94bdd-c210-4398-abf8-e1444f6c2cca"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.175686 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dda94bdd-c210-4398-abf8-e1444f6c2cca" (UID: "dda94bdd-c210-4398-abf8-e1444f6c2cca"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.188333 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-scripts" (OuterVolumeSpecName: "scripts") pod "dda94bdd-c210-4398-abf8-e1444f6c2cca" (UID: "dda94bdd-c210-4398-abf8-e1444f6c2cca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.189051 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda94bdd-c210-4398-abf8-e1444f6c2cca-kube-api-access-447qz" (OuterVolumeSpecName: "kube-api-access-447qz") pod "dda94bdd-c210-4398-abf8-e1444f6c2cca" (UID: "dda94bdd-c210-4398-abf8-e1444f6c2cca"). InnerVolumeSpecName "kube-api-access-447qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.219688 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dda94bdd-c210-4398-abf8-e1444f6c2cca" (UID: "dda94bdd-c210-4398-abf8-e1444f6c2cca"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.265485 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dda94bdd-c210-4398-abf8-e1444f6c2cca" (UID: "dda94bdd-c210-4398-abf8-e1444f6c2cca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.276342 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.276372 4925 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.276383 4925 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.276392 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.276401 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-447qz\" (UniqueName: \"kubernetes.io/projected/dda94bdd-c210-4398-abf8-e1444f6c2cca-kube-api-access-447qz\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.276410 4925 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dda94bdd-c210-4398-abf8-e1444f6c2cca-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.295353 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-config-data" (OuterVolumeSpecName: "config-data") pod "dda94bdd-c210-4398-abf8-e1444f6c2cca" (UID: "dda94bdd-c210-4398-abf8-e1444f6c2cca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.377942 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda94bdd-c210-4398-abf8-e1444f6c2cca-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.401300 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.401721 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.408680 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.428182 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.429301 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.429518 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.437574 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 
11:22:04.947681 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dda94bdd-c210-4398-abf8-e1444f6c2cca","Type":"ContainerDied","Data":"16bf7eaa4a424e22524b7e170678e27e577591d27699bda8fcff44ec230df620"} Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.948838 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.948294 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.949378 4925 scope.go:117] "RemoveContainer" containerID="253be3ee43094eefa355ad5eafb649ce0833ef6bd6bde9c41cd8a9c4131be408" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.956565 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.956654 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.974571 4925 scope.go:117] "RemoveContainer" containerID="f6aba3db89a743f8686fef69f7aa2293c17f4ef744944af3e6ba21197f535a22" Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.975520 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:22:04 crc kubenswrapper[4925]: I0202 11:22:04.985343 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.009091 4925 scope.go:117] "RemoveContainer" containerID="fa1b2697c14f323d290321e9d14713706124d25a61f5a20a2737fd0b378768a7" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.016130 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:22:05 crc kubenswrapper[4925]: E0202 11:22:05.016545 4925 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="ceilometer-central-agent" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.016571 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="ceilometer-central-agent" Feb 02 11:22:05 crc kubenswrapper[4925]: E0202 11:22:05.016585 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="ceilometer-notification-agent" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.016594 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="ceilometer-notification-agent" Feb 02 11:22:05 crc kubenswrapper[4925]: E0202 11:22:05.016608 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="sg-core" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.016615 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="sg-core" Feb 02 11:22:05 crc kubenswrapper[4925]: E0202 11:22:05.016626 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="proxy-httpd" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.016633 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="proxy-httpd" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.016836 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="proxy-httpd" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.016850 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="ceilometer-notification-agent" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.016865 4925 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="ceilometer-central-agent" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.016878 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" containerName="sg-core" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.018403 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.020447 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.021560 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.024899 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.054966 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.058283 4925 scope.go:117] "RemoveContainer" containerID="20147ccdd738512e31471a3de7494d6f4a8893115838c6066724b22e28a0829f" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.194972 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-config-data\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.196358 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-run-httpd\") pod \"ceilometer-0\" (UID: 
\"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.196436 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.196491 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-log-httpd\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.196622 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sj8c\" (UniqueName: \"kubernetes.io/projected/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-kube-api-access-4sj8c\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.196660 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.196689 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc 
kubenswrapper[4925]: I0202 11:22:05.196718 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-scripts\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.297952 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sj8c\" (UniqueName: \"kubernetes.io/projected/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-kube-api-access-4sj8c\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.298009 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.298035 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.298057 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-scripts\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.298105 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-config-data\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.298149 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-run-httpd\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.298175 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.298202 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-log-httpd\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.298733 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-log-httpd\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.301254 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-run-httpd\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.302380 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.303379 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.305700 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-scripts\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.306440 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-config-data\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.306621 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.325580 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sj8c\" (UniqueName: \"kubernetes.io/projected/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-kube-api-access-4sj8c\") pod \"ceilometer-0\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") 
" pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.341092 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.811000 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:22:05 crc kubenswrapper[4925]: I0202 11:22:05.960758 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerStarted","Data":"cd490073537e39ef2fa54da3850c5e4658bd0768cbbad46e2d57cf62c1c680f3"} Feb 02 11:22:06 crc kubenswrapper[4925]: I0202 11:22:06.674748 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda94bdd-c210-4398-abf8-e1444f6c2cca" path="/var/lib/kubelet/pods/dda94bdd-c210-4398-abf8-e1444f6c2cca/volumes" Feb 02 11:22:06 crc kubenswrapper[4925]: I0202 11:22:06.975132 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerStarted","Data":"bf4acea1d76a24c763dd57c8b8848f75fc99bfe52fd5e968bee4a7945895e282"} Feb 02 11:22:07 crc kubenswrapper[4925]: I0202 11:22:07.985041 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerStarted","Data":"a70c0517f26283b1aac165561f316a3dbd9462d251fc85201a911c8f27c6c6ad"} Feb 02 11:22:08 crc kubenswrapper[4925]: I0202 11:22:08.993971 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerStarted","Data":"7cba57b1454de04b102aad779a5243e307f47a79b1c937b814177137d1c62b07"} Feb 02 11:22:09 crc kubenswrapper[4925]: I0202 11:22:09.276692 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 11:22:11 crc 
kubenswrapper[4925]: I0202 11:22:11.013232 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerStarted","Data":"2d0262c8ab7349e23b6b642e75f5d551f5697e295c074518a98d854a010f9dfa"} Feb 02 11:22:11 crc kubenswrapper[4925]: I0202 11:22:11.013705 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:22:11 crc kubenswrapper[4925]: I0202 11:22:11.033239 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.167202377 podStartE2EDuration="7.033217795s" podCreationTimestamp="2026-02-02 11:22:04 +0000 UTC" firstStartedPulling="2026-02-02 11:22:05.81364555 +0000 UTC m=+1502.817894512" lastFinishedPulling="2026-02-02 11:22:10.679660968 +0000 UTC m=+1507.683909930" observedRunningTime="2026-02-02 11:22:11.031040896 +0000 UTC m=+1508.035289858" watchObservedRunningTime="2026-02-02 11:22:11.033217795 +0000 UTC m=+1508.037466757" Feb 02 11:22:35 crc kubenswrapper[4925]: I0202 11:22:35.349027 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 11:22:44 crc kubenswrapper[4925]: I0202 11:22:44.297054 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:22:45 crc kubenswrapper[4925]: I0202 11:22:45.076987 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:22:48 crc kubenswrapper[4925]: I0202 11:22:48.470898 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0d6b9691-80b3-418b-a4c7-fc80c0438123" containerName="rabbitmq" containerID="cri-o://00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d" gracePeriod=604796 Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.309339 4925 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="435dc982-a475-4753-81d0-58bff20a6f17" containerName="rabbitmq" containerID="cri-o://259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1" gracePeriod=604796 Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.664862 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-74xg6"] Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.667308 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.674892 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74xg6"] Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.799967 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtdvg\" (UniqueName: \"kubernetes.io/projected/1c662967-b951-42e2-a1b2-87f796f4ae48-kube-api-access-mtdvg\") pod \"redhat-marketplace-74xg6\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.800412 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-catalog-content\") pod \"redhat-marketplace-74xg6\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.800457 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-utilities\") pod \"redhat-marketplace-74xg6\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " 
pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.902606 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtdvg\" (UniqueName: \"kubernetes.io/projected/1c662967-b951-42e2-a1b2-87f796f4ae48-kube-api-access-mtdvg\") pod \"redhat-marketplace-74xg6\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.902674 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-catalog-content\") pod \"redhat-marketplace-74xg6\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.902709 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-utilities\") pod \"redhat-marketplace-74xg6\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.903389 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-catalog-content\") pod \"redhat-marketplace-74xg6\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.903409 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-utilities\") pod \"redhat-marketplace-74xg6\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " pod="openshift-marketplace/redhat-marketplace-74xg6" 
Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.931928 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtdvg\" (UniqueName: \"kubernetes.io/projected/1c662967-b951-42e2-a1b2-87f796f4ae48-kube-api-access-mtdvg\") pod \"redhat-marketplace-74xg6\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:49 crc kubenswrapper[4925]: I0202 11:22:49.997615 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:50 crc kubenswrapper[4925]: I0202 11:22:50.488599 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74xg6"] Feb 02 11:22:51 crc kubenswrapper[4925]: I0202 11:22:51.330342 4925 generic.go:334] "Generic (PLEG): container finished" podID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerID="c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0" exitCode=0 Feb 02 11:22:51 crc kubenswrapper[4925]: I0202 11:22:51.330429 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74xg6" event={"ID":"1c662967-b951-42e2-a1b2-87f796f4ae48","Type":"ContainerDied","Data":"c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0"} Feb 02 11:22:51 crc kubenswrapper[4925]: I0202 11:22:51.330726 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74xg6" event={"ID":"1c662967-b951-42e2-a1b2-87f796f4ae48","Type":"ContainerStarted","Data":"d83754918e723366a6f8e092a43a2d21d4dffcf0a4061704c0a0eb88b932c14b"} Feb 02 11:22:52 crc kubenswrapper[4925]: I0202 11:22:52.345319 4925 generic.go:334] "Generic (PLEG): container finished" podID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerID="66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779" exitCode=0 Feb 02 11:22:52 crc kubenswrapper[4925]: I0202 11:22:52.345420 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74xg6" event={"ID":"1c662967-b951-42e2-a1b2-87f796f4ae48","Type":"ContainerDied","Data":"66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779"} Feb 02 11:22:53 crc kubenswrapper[4925]: I0202 11:22:53.355584 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74xg6" event={"ID":"1c662967-b951-42e2-a1b2-87f796f4ae48","Type":"ContainerStarted","Data":"fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9"} Feb 02 11:22:53 crc kubenswrapper[4925]: I0202 11:22:53.382534 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-74xg6" podStartSLOduration=2.894161909 podStartE2EDuration="4.382510837s" podCreationTimestamp="2026-02-02 11:22:49 +0000 UTC" firstStartedPulling="2026-02-02 11:22:51.331964914 +0000 UTC m=+1548.336213876" lastFinishedPulling="2026-02-02 11:22:52.820313842 +0000 UTC m=+1549.824562804" observedRunningTime="2026-02-02 11:22:53.377711617 +0000 UTC m=+1550.381960579" watchObservedRunningTime="2026-02-02 11:22:53.382510837 +0000 UTC m=+1550.386759799" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.307471 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.386404 4925 generic.go:334] "Generic (PLEG): container finished" podID="0d6b9691-80b3-418b-a4c7-fc80c0438123" containerID="00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d" exitCode=0 Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.386713 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d6b9691-80b3-418b-a4c7-fc80c0438123","Type":"ContainerDied","Data":"00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d"} Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.386923 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d6b9691-80b3-418b-a4c7-fc80c0438123","Type":"ContainerDied","Data":"662a4022280c3ce465866739ffa66ee8b75ebbca99846bbdc7f241d080127d22"} Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.386945 4925 scope.go:117] "RemoveContainer" containerID="00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.387133 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408461 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d6b9691-80b3-418b-a4c7-fc80c0438123-pod-info\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408547 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9sbc\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-kube-api-access-l9sbc\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408591 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-confd\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408627 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-plugins\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408650 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-erlang-cookie\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408670 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-plugins-conf\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408721 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d6b9691-80b3-418b-a4c7-fc80c0438123-erlang-cookie-secret\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408799 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-tls\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408836 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-server-conf\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408876 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.408892 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-config-data\") pod \"0d6b9691-80b3-418b-a4c7-fc80c0438123\" (UID: \"0d6b9691-80b3-418b-a4c7-fc80c0438123\") " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.411842 
4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.412231 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.412407 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.421614 4925 scope.go:117] "RemoveContainer" containerID="4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.421787 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-kube-api-access-l9sbc" (OuterVolumeSpecName: "kube-api-access-l9sbc") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "kube-api-access-l9sbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.421993 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d6b9691-80b3-418b-a4c7-fc80c0438123-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.423114 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0d6b9691-80b3-418b-a4c7-fc80c0438123-pod-info" (OuterVolumeSpecName: "pod-info") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.431255 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.431646 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.445873 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-config-data" (OuterVolumeSpecName: "config-data") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.485417 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-server-conf" (OuterVolumeSpecName: "server-conf") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510866 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9sbc\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-kube-api-access-l9sbc\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510906 4925 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510916 4925 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510924 4925 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-plugins-conf\") on 
node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510932 4925 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d6b9691-80b3-418b-a4c7-fc80c0438123-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510941 4925 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510949 4925 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510977 4925 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510986 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d6b9691-80b3-418b-a4c7-fc80c0438123-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.510994 4925 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d6b9691-80b3-418b-a4c7-fc80c0438123-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.547341 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0d6b9691-80b3-418b-a4c7-fc80c0438123" (UID: "0d6b9691-80b3-418b-a4c7-fc80c0438123"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.551099 4925 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.588879 4925 scope.go:117] "RemoveContainer" containerID="00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d" Feb 02 11:22:55 crc kubenswrapper[4925]: E0202 11:22:55.589685 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d\": container with ID starting with 00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d not found: ID does not exist" containerID="00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.589762 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d"} err="failed to get container status \"00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d\": rpc error: code = NotFound desc = could not find container \"00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d\": container with ID starting with 00accac3619592cb6bc025055707340df438257bf5a585dca3067719e7b58d7d not found: ID does not exist" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.589826 4925 scope.go:117] "RemoveContainer" containerID="4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73" Feb 02 11:22:55 crc kubenswrapper[4925]: E0202 11:22:55.590295 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73\": container with ID starting with 
4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73 not found: ID does not exist" containerID="4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.590349 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73"} err="failed to get container status \"4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73\": rpc error: code = NotFound desc = could not find container \"4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73\": container with ID starting with 4b576d38c799222ec6a0ca164153d6ae54494ef632ee6de450da84c205e09e73 not found: ID does not exist" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.613892 4925 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.613936 4925 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d6b9691-80b3-418b-a4c7-fc80c0438123-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.736656 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.745240 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.773425 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:22:55 crc kubenswrapper[4925]: E0202 11:22:55.773971 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6b9691-80b3-418b-a4c7-fc80c0438123" containerName="rabbitmq" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.773998 
4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6b9691-80b3-418b-a4c7-fc80c0438123" containerName="rabbitmq" Feb 02 11:22:55 crc kubenswrapper[4925]: E0202 11:22:55.774058 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6b9691-80b3-418b-a4c7-fc80c0438123" containerName="setup-container" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.774067 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6b9691-80b3-418b-a4c7-fc80c0438123" containerName="setup-container" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.774814 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6b9691-80b3-418b-a4c7-fc80c0438123" containerName="rabbitmq" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.778625 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.785958 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.786219 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.786246 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.786329 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.786413 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.786463 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rbf7n" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.786662 4925 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.793223 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922275 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f988cc52-4086-4387-971c-ecd4837c512c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922724 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f988cc52-4086-4387-971c-ecd4837c512c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922776 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922795 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922823 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/f988cc52-4086-4387-971c-ecd4837c512c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922852 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74lfn\" (UniqueName: \"kubernetes.io/projected/f988cc52-4086-4387-971c-ecd4837c512c-kube-api-access-74lfn\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922902 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922921 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f988cc52-4086-4387-971c-ecd4837c512c-config-data\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922939 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922969 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:55 crc kubenswrapper[4925]: I0202 11:22:55.922983 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f988cc52-4086-4387-971c-ecd4837c512c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024240 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f988cc52-4086-4387-971c-ecd4837c512c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024305 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74lfn\" (UniqueName: \"kubernetes.io/projected/f988cc52-4086-4387-971c-ecd4837c512c-kube-api-access-74lfn\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024355 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024373 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f988cc52-4086-4387-971c-ecd4837c512c-config-data\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " 
pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024389 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024415 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024428 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f988cc52-4086-4387-971c-ecd4837c512c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024449 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f988cc52-4086-4387-971c-ecd4837c512c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024495 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f988cc52-4086-4387-971c-ecd4837c512c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024529 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.024547 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.025091 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.026743 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.027684 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f988cc52-4086-4387-971c-ecd4837c512c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.028539 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f988cc52-4086-4387-971c-ecd4837c512c-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.029421 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.030908 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f988cc52-4086-4387-971c-ecd4837c512c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.036707 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.036723 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f988cc52-4086-4387-971c-ecd4837c512c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.037147 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f988cc52-4086-4387-971c-ecd4837c512c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.038184 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f988cc52-4086-4387-971c-ecd4837c512c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.044830 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74lfn\" (UniqueName: \"kubernetes.io/projected/f988cc52-4086-4387-971c-ecd4837c512c-kube-api-access-74lfn\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.066801 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"f988cc52-4086-4387-971c-ecd4837c512c\") " pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.108711 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.122327 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227271 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227356 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/435dc982-a475-4753-81d0-58bff20a6f17-erlang-cookie-secret\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227419 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-plugins\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227453 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrhxg\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-kube-api-access-nrhxg\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227482 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-confd\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227529 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-plugins-conf\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227560 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-server-conf\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227612 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-config-data\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227629 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-erlang-cookie\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227674 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-tls\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.227701 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435dc982-a475-4753-81d0-58bff20a6f17-pod-info\") pod \"435dc982-a475-4753-81d0-58bff20a6f17\" (UID: \"435dc982-a475-4753-81d0-58bff20a6f17\") " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 
11:22:56.227936 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.228338 4925 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.228481 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.228666 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.237588 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/435dc982-a475-4753-81d0-58bff20a6f17-pod-info" (OuterVolumeSpecName: "pod-info") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.237616 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435dc982-a475-4753-81d0-58bff20a6f17-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.238569 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-kube-api-access-nrhxg" (OuterVolumeSpecName: "kube-api-access-nrhxg") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "kube-api-access-nrhxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.242920 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.242972 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.266105 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-config-data" (OuterVolumeSpecName: "config-data") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.284364 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-server-conf" (OuterVolumeSpecName: "server-conf") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.332885 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrhxg\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-kube-api-access-nrhxg\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.333395 4925 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.333433 4925 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.333445 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435dc982-a475-4753-81d0-58bff20a6f17-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 
11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.333458 4925 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.333469 4925 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.333480 4925 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435dc982-a475-4753-81d0-58bff20a6f17-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.333533 4925 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.333547 4925 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/435dc982-a475-4753-81d0-58bff20a6f17-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.359739 4925 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.369354 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "435dc982-a475-4753-81d0-58bff20a6f17" (UID: "435dc982-a475-4753-81d0-58bff20a6f17"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.409813 4925 generic.go:334] "Generic (PLEG): container finished" podID="435dc982-a475-4753-81d0-58bff20a6f17" containerID="259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1" exitCode=0 Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.409857 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"435dc982-a475-4753-81d0-58bff20a6f17","Type":"ContainerDied","Data":"259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1"} Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.409941 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"435dc982-a475-4753-81d0-58bff20a6f17","Type":"ContainerDied","Data":"2c64223f10e43efac54cf9ed0bab0882368c04e9b77b5be71aaf6da61b9cb061"} Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.409960 4925 scope.go:117] "RemoveContainer" containerID="259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.410089 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.436902 4925 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435dc982-a475-4753-81d0-58bff20a6f17-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.436941 4925 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.443953 4925 scope.go:117] "RemoveContainer" containerID="34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.466241 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.478303 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.488326 4925 scope.go:117] "RemoveContainer" containerID="259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1" Feb 02 11:22:56 crc kubenswrapper[4925]: E0202 11:22:56.489617 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1\": container with ID starting with 259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1 not found: ID does not exist" containerID="259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.489653 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1"} err="failed to get container status 
\"259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1\": rpc error: code = NotFound desc = could not find container \"259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1\": container with ID starting with 259036d7ce6e79450ab157c656184e2e6dee06072870b59f8a294d9030a0bab1 not found: ID does not exist" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.489676 4925 scope.go:117] "RemoveContainer" containerID="34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5" Feb 02 11:22:56 crc kubenswrapper[4925]: E0202 11:22:56.489884 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5\": container with ID starting with 34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5 not found: ID does not exist" containerID="34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.489905 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5"} err="failed to get container status \"34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5\": rpc error: code = NotFound desc = could not find container \"34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5\": container with ID starting with 34fc9e387138d7a2caf08e6dd7e3e70fa91f331ed160a0b917b941363bbff6f5 not found: ID does not exist" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.490512 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:22:56 crc kubenswrapper[4925]: E0202 11:22:56.490880 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435dc982-a475-4753-81d0-58bff20a6f17" containerName="setup-container" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.490896 4925 
state_mem.go:107] "Deleted CPUSet assignment" podUID="435dc982-a475-4753-81d0-58bff20a6f17" containerName="setup-container" Feb 02 11:22:56 crc kubenswrapper[4925]: E0202 11:22:56.490930 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435dc982-a475-4753-81d0-58bff20a6f17" containerName="rabbitmq" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.490938 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="435dc982-a475-4753-81d0-58bff20a6f17" containerName="rabbitmq" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.491107 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="435dc982-a475-4753-81d0-58bff20a6f17" containerName="rabbitmq" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.492122 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.494265 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.494461 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.494603 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.494687 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.494783 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.494697 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.495536 4925 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bjhdc" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.499694 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.640998 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641052 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f584c201-5eae-46d6-a9c1-b360f5506d24-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641098 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641130 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f584c201-5eae-46d6-a9c1-b360f5506d24-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641177 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f584c201-5eae-46d6-a9c1-b360f5506d24-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641199 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641254 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vj2z\" (UniqueName: \"kubernetes.io/projected/f584c201-5eae-46d6-a9c1-b360f5506d24-kube-api-access-6vj2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641280 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641298 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f584c201-5eae-46d6-a9c1-b360f5506d24-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641316 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.641352 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f584c201-5eae-46d6-a9c1-b360f5506d24-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.677976 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6b9691-80b3-418b-a4c7-fc80c0438123" path="/var/lib/kubelet/pods/0d6b9691-80b3-418b-a4c7-fc80c0438123/volumes" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.679444 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435dc982-a475-4753-81d0-58bff20a6f17" path="/var/lib/kubelet/pods/435dc982-a475-4753-81d0-58bff20a6f17/volumes" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.720343 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:22:56 crc kubenswrapper[4925]: W0202 11:22:56.735548 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf988cc52_4086_4387_971c_ecd4837c512c.slice/crio-bacd664d37789ba7df6f84540485c37539a6fa4e66bb4a1cc83501392fd178de WatchSource:0}: Error finding container bacd664d37789ba7df6f84540485c37539a6fa4e66bb4a1cc83501392fd178de: Status 404 returned error can't find the container with id bacd664d37789ba7df6f84540485c37539a6fa4e66bb4a1cc83501392fd178de Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742503 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f584c201-5eae-46d6-a9c1-b360f5506d24-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742577 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742601 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f584c201-5eae-46d6-a9c1-b360f5506d24-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742651 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f584c201-5eae-46d6-a9c1-b360f5506d24-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742681 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742739 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vj2z\" (UniqueName: 
\"kubernetes.io/projected/f584c201-5eae-46d6-a9c1-b360f5506d24-kube-api-access-6vj2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742776 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742799 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f584c201-5eae-46d6-a9c1-b360f5506d24-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742826 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742881 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f584c201-5eae-46d6-a9c1-b360f5506d24-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.742939 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.744432 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.745514 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f584c201-5eae-46d6-a9c1-b360f5506d24-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.747723 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.748359 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.748513 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f584c201-5eae-46d6-a9c1-b360f5506d24-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc 
kubenswrapper[4925]: I0202 11:22:56.749250 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f584c201-5eae-46d6-a9c1-b360f5506d24-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.752911 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.755339 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f584c201-5eae-46d6-a9c1-b360f5506d24-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.755387 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f584c201-5eae-46d6-a9c1-b360f5506d24-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.759119 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f584c201-5eae-46d6-a9c1-b360f5506d24-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.769671 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vj2z\" (UniqueName: 
\"kubernetes.io/projected/f584c201-5eae-46d6-a9c1-b360f5506d24-kube-api-access-6vj2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.784981 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f584c201-5eae-46d6-a9c1-b360f5506d24\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:56 crc kubenswrapper[4925]: I0202 11:22:56.816590 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:22:57 crc kubenswrapper[4925]: I0202 11:22:57.247064 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:22:57 crc kubenswrapper[4925]: I0202 11:22:57.424768 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f988cc52-4086-4387-971c-ecd4837c512c","Type":"ContainerStarted","Data":"bacd664d37789ba7df6f84540485c37539a6fa4e66bb4a1cc83501392fd178de"} Feb 02 11:22:57 crc kubenswrapper[4925]: I0202 11:22:57.428648 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f584c201-5eae-46d6-a9c1-b360f5506d24","Type":"ContainerStarted","Data":"d392167bbd8d8054333fd6a3ce6a6637a92ebc3e946f5bd567871e3e4dd83e5e"} Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.008431 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-5nv8x"] Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.009823 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.016109 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.024870 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-5nv8x"] Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.169175 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-config\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.169264 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.169323 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmcb\" (UniqueName: \"kubernetes.io/projected/c3b31ff7-5312-402c-b0b5-957b0b242143-kube-api-access-lcmcb\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.169369 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " 
pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.169413 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.169533 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.217596 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-5nv8x"] Feb 02 11:22:58 crc kubenswrapper[4925]: E0202 11:22:58.218286 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-lcmcb openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" podUID="c3b31ff7-5312-402c-b0b5-957b0b242143" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.249019 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-r8n85"] Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.250913 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.265937 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-r8n85"] Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.270624 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcmcb\" (UniqueName: \"kubernetes.io/projected/c3b31ff7-5312-402c-b0b5-957b0b242143-kube-api-access-lcmcb\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.270663 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.270690 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.270714 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.270804 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-config\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.270839 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.271787 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.272325 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.272825 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.275203 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-dns-svc\") pod 
\"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.275365 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-config\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.342593 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcmcb\" (UniqueName: \"kubernetes.io/projected/c3b31ff7-5312-402c-b0b5-957b0b242143-kube-api-access-lcmcb\") pod \"dnsmasq-dns-6447ccbd8f-5nv8x\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.378803 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhl6\" (UniqueName: \"kubernetes.io/projected/d9244868-1883-4530-890c-4858c6733192-kube-api-access-qfhl6\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.378857 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-config\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.378879 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-nb\") pod 
\"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.378954 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.378990 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.379105 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.438784 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.438808 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f988cc52-4086-4387-971c-ecd4837c512c","Type":"ContainerStarted","Data":"6a4579ef1be6a1352d4c64fa63191ac9e3fd2798017cfb25ac4f7bd46e52e103"} Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.452888 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.481423 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.481483 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhl6\" (UniqueName: \"kubernetes.io/projected/d9244868-1883-4530-890c-4858c6733192-kube-api-access-qfhl6\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.481511 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-config\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.481528 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.481596 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " 
pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.481639 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.482400 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.482462 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-config\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.482491 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.482542 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 
11:22:58.482644 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.506907 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfhl6\" (UniqueName: \"kubernetes.io/projected/d9244868-1883-4530-890c-4858c6733192-kube-api-access-qfhl6\") pod \"dnsmasq-dns-864d5fc68c-r8n85\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.576851 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.583457 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcmcb\" (UniqueName: \"kubernetes.io/projected/c3b31ff7-5312-402c-b0b5-957b0b242143-kube-api-access-lcmcb\") pod \"c3b31ff7-5312-402c-b0b5-957b0b242143\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.583565 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-dns-svc\") pod \"c3b31ff7-5312-402c-b0b5-957b0b242143\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.583604 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-nb\") pod \"c3b31ff7-5312-402c-b0b5-957b0b242143\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " Feb 02 11:22:58 crc 
kubenswrapper[4925]: I0202 11:22:58.583681 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-openstack-edpm-ipam\") pod \"c3b31ff7-5312-402c-b0b5-957b0b242143\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.583752 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-config\") pod \"c3b31ff7-5312-402c-b0b5-957b0b242143\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.583767 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-sb\") pod \"c3b31ff7-5312-402c-b0b5-957b0b242143\" (UID: \"c3b31ff7-5312-402c-b0b5-957b0b242143\") " Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.585262 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3b31ff7-5312-402c-b0b5-957b0b242143" (UID: "c3b31ff7-5312-402c-b0b5-957b0b242143"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.585351 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c3b31ff7-5312-402c-b0b5-957b0b242143" (UID: "c3b31ff7-5312-402c-b0b5-957b0b242143"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.585541 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3b31ff7-5312-402c-b0b5-957b0b242143" (UID: "c3b31ff7-5312-402c-b0b5-957b0b242143"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.585614 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-config" (OuterVolumeSpecName: "config") pod "c3b31ff7-5312-402c-b0b5-957b0b242143" (UID: "c3b31ff7-5312-402c-b0b5-957b0b242143"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.585755 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3b31ff7-5312-402c-b0b5-957b0b242143" (UID: "c3b31ff7-5312-402c-b0b5-957b0b242143"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.589354 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b31ff7-5312-402c-b0b5-957b0b242143-kube-api-access-lcmcb" (OuterVolumeSpecName: "kube-api-access-lcmcb") pod "c3b31ff7-5312-402c-b0b5-957b0b242143" (UID: "c3b31ff7-5312-402c-b0b5-957b0b242143"). InnerVolumeSpecName "kube-api-access-lcmcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.685754 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.686003 4925 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.686015 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.686023 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.686034 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcmcb\" (UniqueName: \"kubernetes.io/projected/c3b31ff7-5312-402c-b0b5-957b0b242143-kube-api-access-lcmcb\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:58 crc kubenswrapper[4925]: I0202 11:22:58.686045 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b31ff7-5312-402c-b0b5-957b0b242143-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.032664 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-r8n85"] Feb 02 11:22:59 crc kubenswrapper[4925]: W0202 11:22:59.036281 4925 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9244868_1883_4530_890c_4858c6733192.slice/crio-4a9a76f845d869116f908796032a9b3955023696c56f48fdb1361e437380dbc2 WatchSource:0}: Error finding container 4a9a76f845d869116f908796032a9b3955023696c56f48fdb1361e437380dbc2: Status 404 returned error can't find the container with id 4a9a76f845d869116f908796032a9b3955023696c56f48fdb1361e437380dbc2 Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.295209 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48"] Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.296912 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.299089 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.299326 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.301144 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.301297 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.312379 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48"] Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.403161 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.403243 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44pq\" (UniqueName: \"kubernetes.io/projected/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-kube-api-access-k44pq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.403314 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.403361 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.447105 4925 generic.go:334] "Generic (PLEG): container finished" podID="d9244868-1883-4530-890c-4858c6733192" containerID="4f4ceb0bfa28f8e12e0e79069b88f740702f638ee71bcfc48ccb806908a72a83" exitCode=0 Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.447175 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" event={"ID":"d9244868-1883-4530-890c-4858c6733192","Type":"ContainerDied","Data":"4f4ceb0bfa28f8e12e0e79069b88f740702f638ee71bcfc48ccb806908a72a83"} Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.447201 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" event={"ID":"d9244868-1883-4530-890c-4858c6733192","Type":"ContainerStarted","Data":"4a9a76f845d869116f908796032a9b3955023696c56f48fdb1361e437380dbc2"} Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.448920 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-5nv8x" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.448912 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f584c201-5eae-46d6-a9c1-b360f5506d24","Type":"ContainerStarted","Data":"1a12b4ae7e939c87785e9ea0cd7c7fe4b926259d3aff12ef4c883453313856e5"} Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.525456 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.525672 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 
11:22:59.525801 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k44pq\" (UniqueName: \"kubernetes.io/projected/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-kube-api-access-k44pq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.525963 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.531635 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.532015 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.543226 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.562383 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k44pq\" (UniqueName: \"kubernetes.io/projected/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-kube-api-access-k44pq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-scg48\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.562485 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-5nv8x"] Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.571533 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-5nv8x"] Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.612138 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.998403 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:22:59 crc kubenswrapper[4925]: I0202 11:22:59.998933 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:23:00 crc kubenswrapper[4925]: I0202 11:23:00.045328 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:23:00 crc kubenswrapper[4925]: I0202 11:23:00.202569 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48"] Feb 02 11:23:00 crc kubenswrapper[4925]: W0202 11:23:00.208615 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfad6a60e_a28e_4942_8fa2_6cceb8e1b146.slice/crio-be5743b81ec9581b08261e22ad9db10fcb9be83558fb168175510dce5ba97c5a WatchSource:0}: Error finding container be5743b81ec9581b08261e22ad9db10fcb9be83558fb168175510dce5ba97c5a: Status 404 returned error can't find the container with id be5743b81ec9581b08261e22ad9db10fcb9be83558fb168175510dce5ba97c5a Feb 02 11:23:00 crc kubenswrapper[4925]: I0202 11:23:00.458262 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" event={"ID":"fad6a60e-a28e-4942-8fa2-6cceb8e1b146","Type":"ContainerStarted","Data":"be5743b81ec9581b08261e22ad9db10fcb9be83558fb168175510dce5ba97c5a"} Feb 02 11:23:00 crc kubenswrapper[4925]: I0202 11:23:00.460410 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" 
event={"ID":"d9244868-1883-4530-890c-4858c6733192","Type":"ContainerStarted","Data":"837ea4c47b02e3be518e1aaa904680fac3ba4d04d8eca0a53a5d4a9942d7b41e"} Feb 02 11:23:00 crc kubenswrapper[4925]: I0202 11:23:00.461037 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:23:00 crc kubenswrapper[4925]: I0202 11:23:00.488301 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" podStartSLOduration=2.48827911 podStartE2EDuration="2.48827911s" podCreationTimestamp="2026-02-02 11:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:23:00.478177947 +0000 UTC m=+1557.482426909" watchObservedRunningTime="2026-02-02 11:23:00.48827911 +0000 UTC m=+1557.492528072" Feb 02 11:23:00 crc kubenswrapper[4925]: I0202 11:23:00.516207 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:23:00 crc kubenswrapper[4925]: I0202 11:23:00.575198 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74xg6"] Feb 02 11:23:00 crc kubenswrapper[4925]: I0202 11:23:00.675124 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b31ff7-5312-402c-b0b5-957b0b242143" path="/var/lib/kubelet/pods/c3b31ff7-5312-402c-b0b5-957b0b242143/volumes" Feb 02 11:23:02 crc kubenswrapper[4925]: I0202 11:23:02.475142 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-74xg6" podUID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerName="registry-server" containerID="cri-o://fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9" gracePeriod=2 Feb 02 11:23:02 crc kubenswrapper[4925]: I0202 11:23:02.976808 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.119260 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-catalog-content\") pod \"1c662967-b951-42e2-a1b2-87f796f4ae48\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.119320 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-utilities\") pod \"1c662967-b951-42e2-a1b2-87f796f4ae48\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.119400 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtdvg\" (UniqueName: \"kubernetes.io/projected/1c662967-b951-42e2-a1b2-87f796f4ae48-kube-api-access-mtdvg\") pod \"1c662967-b951-42e2-a1b2-87f796f4ae48\" (UID: \"1c662967-b951-42e2-a1b2-87f796f4ae48\") " Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.121068 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-utilities" (OuterVolumeSpecName: "utilities") pod "1c662967-b951-42e2-a1b2-87f796f4ae48" (UID: "1c662967-b951-42e2-a1b2-87f796f4ae48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.141991 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c662967-b951-42e2-a1b2-87f796f4ae48-kube-api-access-mtdvg" (OuterVolumeSpecName: "kube-api-access-mtdvg") pod "1c662967-b951-42e2-a1b2-87f796f4ae48" (UID: "1c662967-b951-42e2-a1b2-87f796f4ae48"). InnerVolumeSpecName "kube-api-access-mtdvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.144717 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c662967-b951-42e2-a1b2-87f796f4ae48" (UID: "1c662967-b951-42e2-a1b2-87f796f4ae48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.220898 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.220929 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c662967-b951-42e2-a1b2-87f796f4ae48-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.220941 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtdvg\" (UniqueName: \"kubernetes.io/projected/1c662967-b951-42e2-a1b2-87f796f4ae48-kube-api-access-mtdvg\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.490844 4925 generic.go:334] "Generic (PLEG): container finished" podID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerID="fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9" exitCode=0 Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.490883 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74xg6" event={"ID":"1c662967-b951-42e2-a1b2-87f796f4ae48","Type":"ContainerDied","Data":"fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9"} Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.490905 4925 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-74xg6" event={"ID":"1c662967-b951-42e2-a1b2-87f796f4ae48","Type":"ContainerDied","Data":"d83754918e723366a6f8e092a43a2d21d4dffcf0a4061704c0a0eb88b932c14b"} Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.490917 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74xg6" Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.490922 4925 scope.go:117] "RemoveContainer" containerID="fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9" Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.536824 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74xg6"] Feb 02 11:23:03 crc kubenswrapper[4925]: I0202 11:23:03.545943 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-74xg6"] Feb 02 11:23:04 crc kubenswrapper[4925]: I0202 11:23:04.674244 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c662967-b951-42e2-a1b2-87f796f4ae48" path="/var/lib/kubelet/pods/1c662967-b951-42e2-a1b2-87f796f4ae48/volumes" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.579363 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.642873 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tn88d"] Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.644526 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" podUID="03aa6e1e-8e44-45b8-9802-10f7e388c390" containerName="dnsmasq-dns" containerID="cri-o://afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7" gracePeriod=10 Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.796186 4925 scope.go:117] "RemoveContainer" 
containerID="66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.923113 4925 scope.go:117] "RemoveContainer" containerID="c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.926672 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.958540 4925 scope.go:117] "RemoveContainer" containerID="fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9" Feb 02 11:23:08 crc kubenswrapper[4925]: E0202 11:23:08.958910 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9\": container with ID starting with fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9 not found: ID does not exist" containerID="fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.958951 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9"} err="failed to get container status \"fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9\": rpc error: code = NotFound desc = could not find container \"fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9\": container with ID starting with fb9eeec2bf240085234a47bcff1756c95729f442531a1a472d406347d7b4efb9 not found: ID does not exist" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.958977 4925 scope.go:117] "RemoveContainer" containerID="66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779" Feb 02 11:23:08 crc kubenswrapper[4925]: E0202 11:23:08.959267 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779\": container with ID starting with 66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779 not found: ID does not exist" containerID="66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.959292 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779"} err="failed to get container status \"66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779\": rpc error: code = NotFound desc = could not find container \"66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779\": container with ID starting with 66907cab01f27aaae5698018a4e8e2e755e8d5a20c313c131625c982897e9779 not found: ID does not exist" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.959307 4925 scope.go:117] "RemoveContainer" containerID="c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0" Feb 02 11:23:08 crc kubenswrapper[4925]: E0202 11:23:08.959568 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0\": container with ID starting with c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0 not found: ID does not exist" containerID="c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0" Feb 02 11:23:08 crc kubenswrapper[4925]: I0202 11:23:08.959633 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0"} err="failed to get container status \"c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0\": rpc error: code = NotFound desc = could not find container 
\"c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0\": container with ID starting with c8a71c41257e5406183ae55f54a8c1c4508d28e9ac51bc8a9e6769e9d6e8f0a0 not found: ID does not exist" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.128383 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.274134 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-dns-svc\") pod \"03aa6e1e-8e44-45b8-9802-10f7e388c390\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.274596 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-sb\") pod \"03aa6e1e-8e44-45b8-9802-10f7e388c390\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.274637 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-config\") pod \"03aa6e1e-8e44-45b8-9802-10f7e388c390\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.274731 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-nb\") pod \"03aa6e1e-8e44-45b8-9802-10f7e388c390\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.274755 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ngmq\" (UniqueName: 
\"kubernetes.io/projected/03aa6e1e-8e44-45b8-9802-10f7e388c390-kube-api-access-2ngmq\") pod \"03aa6e1e-8e44-45b8-9802-10f7e388c390\" (UID: \"03aa6e1e-8e44-45b8-9802-10f7e388c390\") " Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.281439 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03aa6e1e-8e44-45b8-9802-10f7e388c390-kube-api-access-2ngmq" (OuterVolumeSpecName: "kube-api-access-2ngmq") pod "03aa6e1e-8e44-45b8-9802-10f7e388c390" (UID: "03aa6e1e-8e44-45b8-9802-10f7e388c390"). InnerVolumeSpecName "kube-api-access-2ngmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.327007 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03aa6e1e-8e44-45b8-9802-10f7e388c390" (UID: "03aa6e1e-8e44-45b8-9802-10f7e388c390"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.338259 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-config" (OuterVolumeSpecName: "config") pod "03aa6e1e-8e44-45b8-9802-10f7e388c390" (UID: "03aa6e1e-8e44-45b8-9802-10f7e388c390"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.338252 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03aa6e1e-8e44-45b8-9802-10f7e388c390" (UID: "03aa6e1e-8e44-45b8-9802-10f7e388c390"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.351595 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03aa6e1e-8e44-45b8-9802-10f7e388c390" (UID: "03aa6e1e-8e44-45b8-9802-10f7e388c390"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.377315 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.377359 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.377374 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.377387 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ngmq\" (UniqueName: \"kubernetes.io/projected/03aa6e1e-8e44-45b8-9802-10f7e388c390-kube-api-access-2ngmq\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.377399 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03aa6e1e-8e44-45b8-9802-10f7e388c390-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.568667 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" event={"ID":"fad6a60e-a28e-4942-8fa2-6cceb8e1b146","Type":"ContainerStarted","Data":"8f809de8a2f1bff36f015f9f78a2d29ea2e088cd9d0c6e5d2f691a420fa189f7"} Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.570407 4925 generic.go:334] "Generic (PLEG): container finished" podID="03aa6e1e-8e44-45b8-9802-10f7e388c390" containerID="afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7" exitCode=0 Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.570480 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.570501 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" event={"ID":"03aa6e1e-8e44-45b8-9802-10f7e388c390","Type":"ContainerDied","Data":"afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7"} Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.570547 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" event={"ID":"03aa6e1e-8e44-45b8-9802-10f7e388c390","Type":"ContainerDied","Data":"7ef58a5cc45861b752b5dc774d1df17a32a58da8d39c981c3e7adde1a603aea8"} Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.570573 4925 scope.go:117] "RemoveContainer" containerID="afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.591591 4925 scope.go:117] "RemoveContainer" containerID="9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.612945 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" podStartSLOduration=1.9004456969999999 podStartE2EDuration="10.612925619s" podCreationTimestamp="2026-02-02 11:22:59 +0000 UTC" 
firstStartedPulling="2026-02-02 11:23:00.211787271 +0000 UTC m=+1557.216036233" lastFinishedPulling="2026-02-02 11:23:08.924267193 +0000 UTC m=+1565.928516155" observedRunningTime="2026-02-02 11:23:09.59071206 +0000 UTC m=+1566.594961022" watchObservedRunningTime="2026-02-02 11:23:09.612925619 +0000 UTC m=+1566.617174581" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.613121 4925 scope.go:117] "RemoveContainer" containerID="afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.613797 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tn88d"] Feb 02 11:23:09 crc kubenswrapper[4925]: E0202 11:23:09.614599 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7\": container with ID starting with afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7 not found: ID does not exist" containerID="afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.614645 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7"} err="failed to get container status \"afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7\": rpc error: code = NotFound desc = could not find container \"afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7\": container with ID starting with afaf5a581f1b9d5c056a00246aa5b30c61849fdfe29470bf475fc6a4683729c7 not found: ID does not exist" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.614671 4925 scope.go:117] "RemoveContainer" containerID="9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896" Feb 02 11:23:09 crc kubenswrapper[4925]: E0202 11:23:09.615217 4925 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896\": container with ID starting with 9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896 not found: ID does not exist" containerID="9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.615258 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896"} err="failed to get container status \"9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896\": rpc error: code = NotFound desc = could not find container \"9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896\": container with ID starting with 9ec3ef4aca76c1be5df1b2741645d332e7ac899c70566f810fd27e5698eaa896 not found: ID does not exist" Feb 02 11:23:09 crc kubenswrapper[4925]: I0202 11:23:09.621525 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-tn88d"] Feb 02 11:23:10 crc kubenswrapper[4925]: I0202 11:23:10.675832 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03aa6e1e-8e44-45b8-9802-10f7e388c390" path="/var/lib/kubelet/pods/03aa6e1e-8e44-45b8-9802-10f7e388c390/volumes" Feb 02 11:23:13 crc kubenswrapper[4925]: I0202 11:23:13.398313 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:23:13 crc kubenswrapper[4925]: I0202 11:23:13.398632 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:23:14 crc kubenswrapper[4925]: I0202 11:23:14.081490 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b856c5697-tn88d" podUID="03aa6e1e-8e44-45b8-9802-10f7e388c390" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.182:5353: i/o timeout" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.218655 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q572l"] Feb 02 11:23:19 crc kubenswrapper[4925]: E0202 11:23:19.219458 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerName="extract-content" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.219474 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerName="extract-content" Feb 02 11:23:19 crc kubenswrapper[4925]: E0202 11:23:19.219495 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerName="registry-server" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.219501 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerName="registry-server" Feb 02 11:23:19 crc kubenswrapper[4925]: E0202 11:23:19.219512 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03aa6e1e-8e44-45b8-9802-10f7e388c390" containerName="dnsmasq-dns" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.219518 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="03aa6e1e-8e44-45b8-9802-10f7e388c390" containerName="dnsmasq-dns" Feb 02 11:23:19 crc kubenswrapper[4925]: E0202 11:23:19.219530 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c662967-b951-42e2-a1b2-87f796f4ae48" 
containerName="extract-utilities" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.219535 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerName="extract-utilities" Feb 02 11:23:19 crc kubenswrapper[4925]: E0202 11:23:19.219556 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03aa6e1e-8e44-45b8-9802-10f7e388c390" containerName="init" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.219564 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="03aa6e1e-8e44-45b8-9802-10f7e388c390" containerName="init" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.219746 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="03aa6e1e-8e44-45b8-9802-10f7e388c390" containerName="dnsmasq-dns" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.219759 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c662967-b951-42e2-a1b2-87f796f4ae48" containerName="registry-server" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.221849 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.229227 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q572l"] Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.281193 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-catalog-content\") pod \"community-operators-q572l\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.281334 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjdr\" (UniqueName: \"kubernetes.io/projected/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-kube-api-access-9rjdr\") pod \"community-operators-q572l\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.281472 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-utilities\") pod \"community-operators-q572l\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.383563 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rjdr\" (UniqueName: \"kubernetes.io/projected/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-kube-api-access-9rjdr\") pod \"community-operators-q572l\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.384008 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-utilities\") pod \"community-operators-q572l\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.384264 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-catalog-content\") pod \"community-operators-q572l\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.384718 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-utilities\") pod \"community-operators-q572l\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.384907 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-catalog-content\") pod \"community-operators-q572l\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.408017 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rjdr\" (UniqueName: \"kubernetes.io/projected/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-kube-api-access-9rjdr\") pod \"community-operators-q572l\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.547724 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.666925 4925 generic.go:334] "Generic (PLEG): container finished" podID="fad6a60e-a28e-4942-8fa2-6cceb8e1b146" containerID="8f809de8a2f1bff36f015f9f78a2d29ea2e088cd9d0c6e5d2f691a420fa189f7" exitCode=0 Feb 02 11:23:19 crc kubenswrapper[4925]: I0202 11:23:19.666998 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" event={"ID":"fad6a60e-a28e-4942-8fa2-6cceb8e1b146","Type":"ContainerDied","Data":"8f809de8a2f1bff36f015f9f78a2d29ea2e088cd9d0c6e5d2f691a420fa189f7"} Feb 02 11:23:20 crc kubenswrapper[4925]: W0202 11:23:20.142532 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90ab9e2b_58a3_4efa_b3c5_5538c3de5edf.slice/crio-46bfd912684fd858d3b108e10330330889e41f5c2d9071733a580eae5b981622 WatchSource:0}: Error finding container 46bfd912684fd858d3b108e10330330889e41f5c2d9071733a580eae5b981622: Status 404 returned error can't find the container with id 46bfd912684fd858d3b108e10330330889e41f5c2d9071733a580eae5b981622 Feb 02 11:23:20 crc kubenswrapper[4925]: I0202 11:23:20.144522 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q572l"] Feb 02 11:23:20 crc kubenswrapper[4925]: I0202 11:23:20.679981 4925 generic.go:334] "Generic (PLEG): container finished" podID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerID="d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8" exitCode=0 Feb 02 11:23:20 crc kubenswrapper[4925]: I0202 11:23:20.680202 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q572l" event={"ID":"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf","Type":"ContainerDied","Data":"d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8"} Feb 02 11:23:20 crc 
kubenswrapper[4925]: I0202 11:23:20.680460 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q572l" event={"ID":"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf","Type":"ContainerStarted","Data":"46bfd912684fd858d3b108e10330330889e41f5c2d9071733a580eae5b981622"} Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.122211 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.222202 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k44pq\" (UniqueName: \"kubernetes.io/projected/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-kube-api-access-k44pq\") pod \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.222311 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-inventory\") pod \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.222373 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-repo-setup-combined-ca-bundle\") pod \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.222511 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-ssh-key-openstack-edpm-ipam\") pod \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\" (UID: \"fad6a60e-a28e-4942-8fa2-6cceb8e1b146\") " Feb 02 
11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.228996 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fad6a60e-a28e-4942-8fa2-6cceb8e1b146" (UID: "fad6a60e-a28e-4942-8fa2-6cceb8e1b146"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.229130 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-kube-api-access-k44pq" (OuterVolumeSpecName: "kube-api-access-k44pq") pod "fad6a60e-a28e-4942-8fa2-6cceb8e1b146" (UID: "fad6a60e-a28e-4942-8fa2-6cceb8e1b146"). InnerVolumeSpecName "kube-api-access-k44pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.250407 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fad6a60e-a28e-4942-8fa2-6cceb8e1b146" (UID: "fad6a60e-a28e-4942-8fa2-6cceb8e1b146"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.252490 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-inventory" (OuterVolumeSpecName: "inventory") pod "fad6a60e-a28e-4942-8fa2-6cceb8e1b146" (UID: "fad6a60e-a28e-4942-8fa2-6cceb8e1b146"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.326024 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.326106 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k44pq\" (UniqueName: \"kubernetes.io/projected/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-kube-api-access-k44pq\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.326119 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.326129 4925 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad6a60e-a28e-4942-8fa2-6cceb8e1b146-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.692065 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" event={"ID":"fad6a60e-a28e-4942-8fa2-6cceb8e1b146","Type":"ContainerDied","Data":"be5743b81ec9581b08261e22ad9db10fcb9be83558fb168175510dce5ba97c5a"} Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.692411 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be5743b81ec9581b08261e22ad9db10fcb9be83558fb168175510dce5ba97c5a" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.692146 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.695061 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q572l" event={"ID":"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf","Type":"ContainerStarted","Data":"e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471"} Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.765666 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg"] Feb 02 11:23:21 crc kubenswrapper[4925]: E0202 11:23:21.766026 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad6a60e-a28e-4942-8fa2-6cceb8e1b146" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.766044 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad6a60e-a28e-4942-8fa2-6cceb8e1b146" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.766232 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad6a60e-a28e-4942-8fa2-6cceb8e1b146" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.766791 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.769877 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.769979 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.770239 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.770748 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.783662 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg"] Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.836023 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.836193 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7brt6\" (UniqueName: \"kubernetes.io/projected/e9f63e74-c179-41bc-a05e-8a374a9710b7-kube-api-access-7brt6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.836284 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.836326 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.937953 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7brt6\" (UniqueName: \"kubernetes.io/projected/e9f63e74-c179-41bc-a05e-8a374a9710b7-kube-api-access-7brt6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.938070 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.938132 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.938199 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.942330 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.942707 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.942736 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:21 crc kubenswrapper[4925]: I0202 11:23:21.968451 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7brt6\" (UniqueName: \"kubernetes.io/projected/e9f63e74-c179-41bc-a05e-8a374a9710b7-kube-api-access-7brt6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twffg\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:22 crc kubenswrapper[4925]: I0202 11:23:22.116652 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:23:22 crc kubenswrapper[4925]: I0202 11:23:22.648914 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg"] Feb 02 11:23:22 crc kubenswrapper[4925]: W0202 11:23:22.655907 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9f63e74_c179_41bc_a05e_8a374a9710b7.slice/crio-96f5e7fe39e0fc16463018d01e680243ec6b764359285fb5d752fa3d1f1484f4 WatchSource:0}: Error finding container 96f5e7fe39e0fc16463018d01e680243ec6b764359285fb5d752fa3d1f1484f4: Status 404 returned error can't find the container with id 96f5e7fe39e0fc16463018d01e680243ec6b764359285fb5d752fa3d1f1484f4 Feb 02 11:23:22 crc kubenswrapper[4925]: I0202 11:23:22.705408 4925 generic.go:334] "Generic (PLEG): container finished" podID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerID="e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471" exitCode=0 Feb 02 11:23:22 crc kubenswrapper[4925]: I0202 11:23:22.705501 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q572l" 
event={"ID":"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf","Type":"ContainerDied","Data":"e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471"} Feb 02 11:23:22 crc kubenswrapper[4925]: I0202 11:23:22.708720 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" event={"ID":"e9f63e74-c179-41bc-a05e-8a374a9710b7","Type":"ContainerStarted","Data":"96f5e7fe39e0fc16463018d01e680243ec6b764359285fb5d752fa3d1f1484f4"} Feb 02 11:23:23 crc kubenswrapper[4925]: I0202 11:23:23.716659 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" event={"ID":"e9f63e74-c179-41bc-a05e-8a374a9710b7","Type":"ContainerStarted","Data":"ca1a9e29b145438a862ed167e666138a4d2e16822ca661995ed7edd10a07469e"} Feb 02 11:23:23 crc kubenswrapper[4925]: I0202 11:23:23.720568 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q572l" event={"ID":"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf","Type":"ContainerStarted","Data":"3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63"} Feb 02 11:23:23 crc kubenswrapper[4925]: I0202 11:23:23.746624 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" podStartSLOduration=2.163257913 podStartE2EDuration="2.746597947s" podCreationTimestamp="2026-02-02 11:23:21 +0000 UTC" firstStartedPulling="2026-02-02 11:23:22.666626446 +0000 UTC m=+1579.670875418" lastFinishedPulling="2026-02-02 11:23:23.24996648 +0000 UTC m=+1580.254215452" observedRunningTime="2026-02-02 11:23:23.735989481 +0000 UTC m=+1580.740238463" watchObservedRunningTime="2026-02-02 11:23:23.746597947 +0000 UTC m=+1580.750846919" Feb 02 11:23:23 crc kubenswrapper[4925]: I0202 11:23:23.759022 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q572l" 
podStartSLOduration=2.189339267 podStartE2EDuration="4.759000481s" podCreationTimestamp="2026-02-02 11:23:19 +0000 UTC" firstStartedPulling="2026-02-02 11:23:20.681671023 +0000 UTC m=+1577.685919985" lastFinishedPulling="2026-02-02 11:23:23.251332237 +0000 UTC m=+1580.255581199" observedRunningTime="2026-02-02 11:23:23.752527427 +0000 UTC m=+1580.756776409" watchObservedRunningTime="2026-02-02 11:23:23.759000481 +0000 UTC m=+1580.763249443" Feb 02 11:23:26 crc kubenswrapper[4925]: I0202 11:23:26.696306 4925 scope.go:117] "RemoveContainer" containerID="72a8c11f472ce3a902ef45f68ceac5feda305e18822dbcaf4665559f07ebfb46" Feb 02 11:23:26 crc kubenswrapper[4925]: I0202 11:23:26.730636 4925 scope.go:117] "RemoveContainer" containerID="d602cb425a29481929a73de6f2f934a7c5603c9fcbe7e82aa6fd14f447cac08e" Feb 02 11:23:26 crc kubenswrapper[4925]: I0202 11:23:26.767958 4925 scope.go:117] "RemoveContainer" containerID="7fd372f8ba4e79c3fe103d832c23ac29312a39d94769378b7c350666a1d37cb6" Feb 02 11:23:26 crc kubenswrapper[4925]: I0202 11:23:26.796856 4925 scope.go:117] "RemoveContainer" containerID="81da7f4ccdacbf4428d202fbada41b16d19f6f1e62ef3cc72a5c7901f7c4b063" Feb 02 11:23:29 crc kubenswrapper[4925]: I0202 11:23:29.548869 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:29 crc kubenswrapper[4925]: I0202 11:23:29.549322 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:29 crc kubenswrapper[4925]: I0202 11:23:29.599092 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:29 crc kubenswrapper[4925]: I0202 11:23:29.808069 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:29 crc kubenswrapper[4925]: I0202 11:23:29.856792 4925 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q572l"] Feb 02 11:23:30 crc kubenswrapper[4925]: E0202 11:23:30.578642 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf988cc52_4086_4387_971c_ecd4837c512c.slice/crio-conmon-6a4579ef1be6a1352d4c64fa63191ac9e3fd2798017cfb25ac4f7bd46e52e103.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:23:30 crc kubenswrapper[4925]: I0202 11:23:30.777646 4925 generic.go:334] "Generic (PLEG): container finished" podID="f988cc52-4086-4387-971c-ecd4837c512c" containerID="6a4579ef1be6a1352d4c64fa63191ac9e3fd2798017cfb25ac4f7bd46e52e103" exitCode=0 Feb 02 11:23:30 crc kubenswrapper[4925]: I0202 11:23:30.777716 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f988cc52-4086-4387-971c-ecd4837c512c","Type":"ContainerDied","Data":"6a4579ef1be6a1352d4c64fa63191ac9e3fd2798017cfb25ac4f7bd46e52e103"} Feb 02 11:23:31 crc kubenswrapper[4925]: I0202 11:23:31.788008 4925 generic.go:334] "Generic (PLEG): container finished" podID="f584c201-5eae-46d6-a9c1-b360f5506d24" containerID="1a12b4ae7e939c87785e9ea0cd7c7fe4b926259d3aff12ef4c883453313856e5" exitCode=0 Feb 02 11:23:31 crc kubenswrapper[4925]: I0202 11:23:31.788113 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f584c201-5eae-46d6-a9c1-b360f5506d24","Type":"ContainerDied","Data":"1a12b4ae7e939c87785e9ea0cd7c7fe4b926259d3aff12ef4c883453313856e5"} Feb 02 11:23:31 crc kubenswrapper[4925]: I0202 11:23:31.790865 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q572l" podUID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerName="registry-server" 
containerID="cri-o://3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63" gracePeriod=2 Feb 02 11:23:31 crc kubenswrapper[4925]: I0202 11:23:31.790958 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f988cc52-4086-4387-971c-ecd4837c512c","Type":"ContainerStarted","Data":"662b09f0593369a1db654d682b3e2e011390c1f32d731f0442adbdf82474a152"} Feb 02 11:23:31 crc kubenswrapper[4925]: I0202 11:23:31.791823 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 11:23:31 crc kubenswrapper[4925]: I0202 11:23:31.847746 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.847726399 podStartE2EDuration="36.847726399s" podCreationTimestamp="2026-02-02 11:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:23:31.83478301 +0000 UTC m=+1588.839031992" watchObservedRunningTime="2026-02-02 11:23:31.847726399 +0000 UTC m=+1588.851975351" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.274102 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.340409 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-utilities\") pod \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.340563 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rjdr\" (UniqueName: \"kubernetes.io/projected/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-kube-api-access-9rjdr\") pod \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.340665 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-catalog-content\") pod \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\" (UID: \"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf\") " Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.341513 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-utilities" (OuterVolumeSpecName: "utilities") pod "90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" (UID: "90ab9e2b-58a3-4efa-b3c5-5538c3de5edf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.346287 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-kube-api-access-9rjdr" (OuterVolumeSpecName: "kube-api-access-9rjdr") pod "90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" (UID: "90ab9e2b-58a3-4efa-b3c5-5538c3de5edf"). InnerVolumeSpecName "kube-api-access-9rjdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.392257 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" (UID: "90ab9e2b-58a3-4efa-b3c5-5538c3de5edf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.443295 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.443334 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rjdr\" (UniqueName: \"kubernetes.io/projected/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-kube-api-access-9rjdr\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.443346 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.799695 4925 generic.go:334] "Generic (PLEG): container finished" podID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerID="3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63" exitCode=0 Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.800160 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q572l" event={"ID":"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf","Type":"ContainerDied","Data":"3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63"} Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.800197 4925 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-q572l" event={"ID":"90ab9e2b-58a3-4efa-b3c5-5538c3de5edf","Type":"ContainerDied","Data":"46bfd912684fd858d3b108e10330330889e41f5c2d9071733a580eae5b981622"} Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.800214 4925 scope.go:117] "RemoveContainer" containerID="3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.800210 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q572l" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.802580 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f584c201-5eae-46d6-a9c1-b360f5506d24","Type":"ContainerStarted","Data":"e4ce29673465c4cf17ac7adf8742edda1a651eaa4d02b415d3ba4ae1c8ddc587"} Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.836228 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.836206433 podStartE2EDuration="36.836206433s" podCreationTimestamp="2026-02-02 11:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:23:32.824470366 +0000 UTC m=+1589.828719348" watchObservedRunningTime="2026-02-02 11:23:32.836206433 +0000 UTC m=+1589.840455405" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.848280 4925 scope.go:117] "RemoveContainer" containerID="e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.854366 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q572l"] Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.862107 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q572l"] Feb 02 
11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.878291 4925 scope.go:117] "RemoveContainer" containerID="d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.910024 4925 scope.go:117] "RemoveContainer" containerID="3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63" Feb 02 11:23:32 crc kubenswrapper[4925]: E0202 11:23:32.910598 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63\": container with ID starting with 3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63 not found: ID does not exist" containerID="3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.910647 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63"} err="failed to get container status \"3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63\": rpc error: code = NotFound desc = could not find container \"3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63\": container with ID starting with 3342695035bd17b14fb49793bf116d5ed4b4d35bb92acd475ffe77f42c344d63 not found: ID does not exist" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.910679 4925 scope.go:117] "RemoveContainer" containerID="e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471" Feb 02 11:23:32 crc kubenswrapper[4925]: E0202 11:23:32.911185 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471\": container with ID starting with e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471 not found: ID does not exist" 
containerID="e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.911209 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471"} err="failed to get container status \"e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471\": rpc error: code = NotFound desc = could not find container \"e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471\": container with ID starting with e2672127126f40b069cb3d2b1f4e396a63a7575ddcdda4f136858add3e65a471 not found: ID does not exist" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.911225 4925 scope.go:117] "RemoveContainer" containerID="d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8" Feb 02 11:23:32 crc kubenswrapper[4925]: E0202 11:23:32.911428 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8\": container with ID starting with d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8 not found: ID does not exist" containerID="d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8" Feb 02 11:23:32 crc kubenswrapper[4925]: I0202 11:23:32.911454 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8"} err="failed to get container status \"d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8\": rpc error: code = NotFound desc = could not find container \"d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8\": container with ID starting with d7c8aecd71c87aaafe383d15fc8dd7987f2fd4e5975a0efe7b5b27dfad2ab2a8 not found: ID does not exist" Feb 02 11:23:34 crc kubenswrapper[4925]: I0202 11:23:34.687930 4925 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" path="/var/lib/kubelet/pods/90ab9e2b-58a3-4efa-b3c5-5538c3de5edf/volumes" Feb 02 11:23:36 crc kubenswrapper[4925]: I0202 11:23:36.817669 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:23:43 crc kubenswrapper[4925]: I0202 11:23:43.398795 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:23:43 crc kubenswrapper[4925]: I0202 11:23:43.399536 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:23:46 crc kubenswrapper[4925]: I0202 11:23:46.112870 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 11:23:46 crc kubenswrapper[4925]: I0202 11:23:46.820286 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.598301 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pw7bb"] Feb 02 11:24:02 crc kubenswrapper[4925]: E0202 11:24:02.599729 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerName="extract-utilities" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.599759 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" 
containerName="extract-utilities" Feb 02 11:24:02 crc kubenswrapper[4925]: E0202 11:24:02.599774 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerName="extract-content" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.599785 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerName="extract-content" Feb 02 11:24:02 crc kubenswrapper[4925]: E0202 11:24:02.599853 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerName="registry-server" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.599866 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerName="registry-server" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.600200 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ab9e2b-58a3-4efa-b3c5-5538c3de5edf" containerName="registry-server" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.602605 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.610595 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pw7bb"] Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.641129 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-catalog-content\") pod \"certified-operators-pw7bb\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.641343 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-utilities\") pod \"certified-operators-pw7bb\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.641547 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wp9c\" (UniqueName: \"kubernetes.io/projected/00ecec7e-d3d8-420d-800d-53b189a55626-kube-api-access-7wp9c\") pod \"certified-operators-pw7bb\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.743256 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wp9c\" (UniqueName: \"kubernetes.io/projected/00ecec7e-d3d8-420d-800d-53b189a55626-kube-api-access-7wp9c\") pod \"certified-operators-pw7bb\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.743418 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-catalog-content\") pod \"certified-operators-pw7bb\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.743471 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-utilities\") pod \"certified-operators-pw7bb\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.743992 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-utilities\") pod \"certified-operators-pw7bb\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.744156 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-catalog-content\") pod \"certified-operators-pw7bb\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.767907 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wp9c\" (UniqueName: \"kubernetes.io/projected/00ecec7e-d3d8-420d-800d-53b189a55626-kube-api-access-7wp9c\") pod \"certified-operators-pw7bb\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:02 crc kubenswrapper[4925]: I0202 11:24:02.930239 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:03 crc kubenswrapper[4925]: I0202 11:24:03.454279 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pw7bb"] Feb 02 11:24:04 crc kubenswrapper[4925]: I0202 11:24:04.056189 4925 generic.go:334] "Generic (PLEG): container finished" podID="00ecec7e-d3d8-420d-800d-53b189a55626" containerID="f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80" exitCode=0 Feb 02 11:24:04 crc kubenswrapper[4925]: I0202 11:24:04.056223 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7bb" event={"ID":"00ecec7e-d3d8-420d-800d-53b189a55626","Type":"ContainerDied","Data":"f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80"} Feb 02 11:24:04 crc kubenswrapper[4925]: I0202 11:24:04.056464 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7bb" event={"ID":"00ecec7e-d3d8-420d-800d-53b189a55626","Type":"ContainerStarted","Data":"96f613ed4afe4ca8a9240a4a67e42b5bdd29aaff00d3532d19604eb3337e2fe3"} Feb 02 11:24:04 crc kubenswrapper[4925]: I0202 11:24:04.058449 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:24:06 crc kubenswrapper[4925]: I0202 11:24:06.073498 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7bb" event={"ID":"00ecec7e-d3d8-420d-800d-53b189a55626","Type":"ContainerStarted","Data":"a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436"} Feb 02 11:24:07 crc kubenswrapper[4925]: I0202 11:24:07.085898 4925 generic.go:334] "Generic (PLEG): container finished" podID="00ecec7e-d3d8-420d-800d-53b189a55626" containerID="a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436" exitCode=0 Feb 02 11:24:07 crc kubenswrapper[4925]: I0202 11:24:07.085961 4925 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-pw7bb" event={"ID":"00ecec7e-d3d8-420d-800d-53b189a55626","Type":"ContainerDied","Data":"a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436"} Feb 02 11:24:08 crc kubenswrapper[4925]: I0202 11:24:08.095540 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7bb" event={"ID":"00ecec7e-d3d8-420d-800d-53b189a55626","Type":"ContainerStarted","Data":"d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb"} Feb 02 11:24:08 crc kubenswrapper[4925]: I0202 11:24:08.118789 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pw7bb" podStartSLOduration=2.461986256 podStartE2EDuration="6.118585971s" podCreationTimestamp="2026-02-02 11:24:02 +0000 UTC" firstStartedPulling="2026-02-02 11:24:04.058175944 +0000 UTC m=+1621.062424906" lastFinishedPulling="2026-02-02 11:24:07.714775659 +0000 UTC m=+1624.719024621" observedRunningTime="2026-02-02 11:24:08.117976635 +0000 UTC m=+1625.122225597" watchObservedRunningTime="2026-02-02 11:24:08.118585971 +0000 UTC m=+1625.122834943" Feb 02 11:24:12 crc kubenswrapper[4925]: I0202 11:24:12.930620 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:12 crc kubenswrapper[4925]: I0202 11:24:12.932214 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:12 crc kubenswrapper[4925]: I0202 11:24:12.982276 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:13 crc kubenswrapper[4925]: I0202 11:24:13.175673 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:13 crc kubenswrapper[4925]: I0202 
11:24:13.226158 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pw7bb"] Feb 02 11:24:13 crc kubenswrapper[4925]: I0202 11:24:13.398589 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:24:13 crc kubenswrapper[4925]: I0202 11:24:13.398916 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:24:13 crc kubenswrapper[4925]: I0202 11:24:13.398956 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:24:13 crc kubenswrapper[4925]: I0202 11:24:13.399657 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:24:13 crc kubenswrapper[4925]: I0202 11:24:13.399717 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" gracePeriod=600 Feb 02 11:24:13 crc kubenswrapper[4925]: E0202 11:24:13.519107 4925 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:24:14 crc kubenswrapper[4925]: I0202 11:24:14.144555 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" exitCode=0 Feb 02 11:24:14 crc kubenswrapper[4925]: I0202 11:24:14.145435 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff"} Feb 02 11:24:14 crc kubenswrapper[4925]: I0202 11:24:14.145473 4925 scope.go:117] "RemoveContainer" containerID="66621d3a93bf4a19f7e5b6564542e797798b65c2f056111ba9523d20399b11ef" Feb 02 11:24:14 crc kubenswrapper[4925]: I0202 11:24:14.145864 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:24:14 crc kubenswrapper[4925]: E0202 11:24:14.146061 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.156216 4925 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-pw7bb" podUID="00ecec7e-d3d8-420d-800d-53b189a55626" containerName="registry-server" containerID="cri-o://d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb" gracePeriod=2 Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.558499 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.687778 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wp9c\" (UniqueName: \"kubernetes.io/projected/00ecec7e-d3d8-420d-800d-53b189a55626-kube-api-access-7wp9c\") pod \"00ecec7e-d3d8-420d-800d-53b189a55626\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.687962 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-catalog-content\") pod \"00ecec7e-d3d8-420d-800d-53b189a55626\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.688008 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-utilities\") pod \"00ecec7e-d3d8-420d-800d-53b189a55626\" (UID: \"00ecec7e-d3d8-420d-800d-53b189a55626\") " Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.688970 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-utilities" (OuterVolumeSpecName: "utilities") pod "00ecec7e-d3d8-420d-800d-53b189a55626" (UID: "00ecec7e-d3d8-420d-800d-53b189a55626"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.694280 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ecec7e-d3d8-420d-800d-53b189a55626-kube-api-access-7wp9c" (OuterVolumeSpecName: "kube-api-access-7wp9c") pod "00ecec7e-d3d8-420d-800d-53b189a55626" (UID: "00ecec7e-d3d8-420d-800d-53b189a55626"). InnerVolumeSpecName "kube-api-access-7wp9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.739860 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00ecec7e-d3d8-420d-800d-53b189a55626" (UID: "00ecec7e-d3d8-420d-800d-53b189a55626"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.789897 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.789941 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ecec7e-d3d8-420d-800d-53b189a55626-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:24:15 crc kubenswrapper[4925]: I0202 11:24:15.789955 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wp9c\" (UniqueName: \"kubernetes.io/projected/00ecec7e-d3d8-420d-800d-53b189a55626-kube-api-access-7wp9c\") on node \"crc\" DevicePath \"\"" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.167652 4925 generic.go:334] "Generic (PLEG): container finished" podID="00ecec7e-d3d8-420d-800d-53b189a55626" 
containerID="d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb" exitCode=0 Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.167706 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7bb" event={"ID":"00ecec7e-d3d8-420d-800d-53b189a55626","Type":"ContainerDied","Data":"d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb"} Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.167730 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pw7bb" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.167744 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pw7bb" event={"ID":"00ecec7e-d3d8-420d-800d-53b189a55626","Type":"ContainerDied","Data":"96f613ed4afe4ca8a9240a4a67e42b5bdd29aaff00d3532d19604eb3337e2fe3"} Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.167790 4925 scope.go:117] "RemoveContainer" containerID="d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.191010 4925 scope.go:117] "RemoveContainer" containerID="a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.205672 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pw7bb"] Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.213678 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pw7bb"] Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.657889 4925 scope.go:117] "RemoveContainer" containerID="f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.677439 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ecec7e-d3d8-420d-800d-53b189a55626" 
path="/var/lib/kubelet/pods/00ecec7e-d3d8-420d-800d-53b189a55626/volumes" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.692237 4925 scope.go:117] "RemoveContainer" containerID="d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb" Feb 02 11:24:16 crc kubenswrapper[4925]: E0202 11:24:16.692813 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb\": container with ID starting with d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb not found: ID does not exist" containerID="d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.692854 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb"} err="failed to get container status \"d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb\": rpc error: code = NotFound desc = could not find container \"d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb\": container with ID starting with d5f8ca43f7267bab2383972ce5008713935aae74bf4b30e53642d3a0de9178eb not found: ID does not exist" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.692882 4925 scope.go:117] "RemoveContainer" containerID="a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436" Feb 02 11:24:16 crc kubenswrapper[4925]: E0202 11:24:16.693306 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436\": container with ID starting with a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436 not found: ID does not exist" containerID="a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436" Feb 02 11:24:16 crc kubenswrapper[4925]: 
I0202 11:24:16.693351 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436"} err="failed to get container status \"a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436\": rpc error: code = NotFound desc = could not find container \"a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436\": container with ID starting with a5dcaf02b2e8ed604f605343321be486a5a03fc8d9039d57ee5bb37cb66ff436 not found: ID does not exist" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.693375 4925 scope.go:117] "RemoveContainer" containerID="f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80" Feb 02 11:24:16 crc kubenswrapper[4925]: E0202 11:24:16.693786 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80\": container with ID starting with f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80 not found: ID does not exist" containerID="f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80" Feb 02 11:24:16 crc kubenswrapper[4925]: I0202 11:24:16.693808 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80"} err="failed to get container status \"f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80\": rpc error: code = NotFound desc = could not find container \"f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80\": container with ID starting with f9315a846b1854ce1b321202450410018bf5fd03e966496d6ec16e117a9b1c80 not found: ID does not exist" Feb 02 11:24:24 crc kubenswrapper[4925]: I0202 11:24:24.672948 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:24:24 crc 
kubenswrapper[4925]: E0202 11:24:24.674733 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:24:26 crc kubenswrapper[4925]: I0202 11:24:26.962124 4925 scope.go:117] "RemoveContainer" containerID="04bf78182aa9ebc59472bda9dd173eb78f57f80908346051af91c2ca8274c61a" Feb 02 11:24:36 crc kubenswrapper[4925]: I0202 11:24:36.664602 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:24:36 crc kubenswrapper[4925]: E0202 11:24:36.665391 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:24:48 crc kubenswrapper[4925]: I0202 11:24:48.664838 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:24:48 crc kubenswrapper[4925]: E0202 11:24:48.665625 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 
02 11:25:01 crc kubenswrapper[4925]: I0202 11:25:01.665658 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:25:01 crc kubenswrapper[4925]: E0202 11:25:01.666412 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:25:14 crc kubenswrapper[4925]: I0202 11:25:14.670926 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:25:14 crc kubenswrapper[4925]: E0202 11:25:14.673438 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:25:28 crc kubenswrapper[4925]: I0202 11:25:28.664601 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:25:28 crc kubenswrapper[4925]: E0202 11:25:28.665581 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:25:43 crc kubenswrapper[4925]: I0202 11:25:43.665273 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:25:43 crc kubenswrapper[4925]: E0202 11:25:43.666274 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:25:54 crc kubenswrapper[4925]: I0202 11:25:54.672811 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:25:54 crc kubenswrapper[4925]: E0202 11:25:54.673754 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:26:08 crc kubenswrapper[4925]: I0202 11:26:08.664162 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:26:08 crc kubenswrapper[4925]: E0202 11:26:08.664995 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:26:13 crc kubenswrapper[4925]: I0202 11:26:13.193572 4925 generic.go:334] "Generic (PLEG): container finished" podID="e9f63e74-c179-41bc-a05e-8a374a9710b7" containerID="ca1a9e29b145438a862ed167e666138a4d2e16822ca661995ed7edd10a07469e" exitCode=0 Feb 02 11:26:13 crc kubenswrapper[4925]: I0202 11:26:13.193663 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" event={"ID":"e9f63e74-c179-41bc-a05e-8a374a9710b7","Type":"ContainerDied","Data":"ca1a9e29b145438a862ed167e666138a4d2e16822ca661995ed7edd10a07469e"} Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.586344 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.734787 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-inventory\") pod \"e9f63e74-c179-41bc-a05e-8a374a9710b7\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.735565 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-ssh-key-openstack-edpm-ipam\") pod \"e9f63e74-c179-41bc-a05e-8a374a9710b7\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.735666 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7brt6\" (UniqueName: \"kubernetes.io/projected/e9f63e74-c179-41bc-a05e-8a374a9710b7-kube-api-access-7brt6\") pod \"e9f63e74-c179-41bc-a05e-8a374a9710b7\" (UID: 
\"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.735690 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-bootstrap-combined-ca-bundle\") pod \"e9f63e74-c179-41bc-a05e-8a374a9710b7\" (UID: \"e9f63e74-c179-41bc-a05e-8a374a9710b7\") " Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.741227 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f63e74-c179-41bc-a05e-8a374a9710b7-kube-api-access-7brt6" (OuterVolumeSpecName: "kube-api-access-7brt6") pod "e9f63e74-c179-41bc-a05e-8a374a9710b7" (UID: "e9f63e74-c179-41bc-a05e-8a374a9710b7"). InnerVolumeSpecName "kube-api-access-7brt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.742044 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e9f63e74-c179-41bc-a05e-8a374a9710b7" (UID: "e9f63e74-c179-41bc-a05e-8a374a9710b7"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.760924 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e9f63e74-c179-41bc-a05e-8a374a9710b7" (UID: "e9f63e74-c179-41bc-a05e-8a374a9710b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.761175 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-inventory" (OuterVolumeSpecName: "inventory") pod "e9f63e74-c179-41bc-a05e-8a374a9710b7" (UID: "e9f63e74-c179-41bc-a05e-8a374a9710b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.838302 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.838670 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7brt6\" (UniqueName: \"kubernetes.io/projected/e9f63e74-c179-41bc-a05e-8a374a9710b7-kube-api-access-7brt6\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.838776 4925 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:14 crc kubenswrapper[4925]: I0202 11:26:14.838860 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9f63e74-c179-41bc-a05e-8a374a9710b7-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.213329 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" event={"ID":"e9f63e74-c179-41bc-a05e-8a374a9710b7","Type":"ContainerDied","Data":"96f5e7fe39e0fc16463018d01e680243ec6b764359285fb5d752fa3d1f1484f4"} Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.213387 4925 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f5e7fe39e0fc16463018d01e680243ec6b764359285fb5d752fa3d1f1484f4" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.213442 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.303293 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf"] Feb 02 11:26:15 crc kubenswrapper[4925]: E0202 11:26:15.303736 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ecec7e-d3d8-420d-800d-53b189a55626" containerName="registry-server" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.303761 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ecec7e-d3d8-420d-800d-53b189a55626" containerName="registry-server" Feb 02 11:26:15 crc kubenswrapper[4925]: E0202 11:26:15.303778 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ecec7e-d3d8-420d-800d-53b189a55626" containerName="extract-content" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.303787 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ecec7e-d3d8-420d-800d-53b189a55626" containerName="extract-content" Feb 02 11:26:15 crc kubenswrapper[4925]: E0202 11:26:15.303804 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ecec7e-d3d8-420d-800d-53b189a55626" containerName="extract-utilities" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.303814 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ecec7e-d3d8-420d-800d-53b189a55626" containerName="extract-utilities" Feb 02 11:26:15 crc kubenswrapper[4925]: E0202 11:26:15.303830 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f63e74-c179-41bc-a05e-8a374a9710b7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 
11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.303840 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f63e74-c179-41bc-a05e-8a374a9710b7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.304040 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ecec7e-d3d8-420d-800d-53b189a55626" containerName="registry-server" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.304068 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f63e74-c179-41bc-a05e-8a374a9710b7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.305651 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.307949 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.308249 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.308425 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.308615 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.311852 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf"] Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.448750 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hzsw\" (UniqueName: 
\"kubernetes.io/projected/dd48f74f-90ff-4eee-bc12-cc30de87d165-kube-api-access-5hzsw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.448911 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.449047 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.550243 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.550335 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-ssh-key-openstack-edpm-ipam\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.550406 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hzsw\" (UniqueName: \"kubernetes.io/projected/dd48f74f-90ff-4eee-bc12-cc30de87d165-kube-api-access-5hzsw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.555661 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.555752 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.573397 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hzsw\" (UniqueName: \"kubernetes.io/projected/dd48f74f-90ff-4eee-bc12-cc30de87d165-kube-api-access-5hzsw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:15 crc kubenswrapper[4925]: I0202 11:26:15.624765 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:26:16 crc kubenswrapper[4925]: I0202 11:26:16.130713 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf"] Feb 02 11:26:16 crc kubenswrapper[4925]: W0202 11:26:16.135120 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd48f74f_90ff_4eee_bc12_cc30de87d165.slice/crio-14b09973f99d25a993097ab1c43d70ea3735052080cec10d6649400498429507 WatchSource:0}: Error finding container 14b09973f99d25a993097ab1c43d70ea3735052080cec10d6649400498429507: Status 404 returned error can't find the container with id 14b09973f99d25a993097ab1c43d70ea3735052080cec10d6649400498429507 Feb 02 11:26:16 crc kubenswrapper[4925]: I0202 11:26:16.221812 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" event={"ID":"dd48f74f-90ff-4eee-bc12-cc30de87d165","Type":"ContainerStarted","Data":"14b09973f99d25a993097ab1c43d70ea3735052080cec10d6649400498429507"} Feb 02 11:26:18 crc kubenswrapper[4925]: I0202 11:26:18.477502 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" event={"ID":"dd48f74f-90ff-4eee-bc12-cc30de87d165","Type":"ContainerStarted","Data":"c5190dcc7a541c402fb0b9ca1e02468e1c76dac0917fc01cca7ae56ac3d0af4a"} Feb 02 11:26:18 crc kubenswrapper[4925]: I0202 11:26:18.497321 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" podStartSLOduration=1.801224968 podStartE2EDuration="3.497299744s" 
podCreationTimestamp="2026-02-02 11:26:15 +0000 UTC" firstStartedPulling="2026-02-02 11:26:16.138179278 +0000 UTC m=+1753.142428240" lastFinishedPulling="2026-02-02 11:26:17.834254054 +0000 UTC m=+1754.838503016" observedRunningTime="2026-02-02 11:26:18.492792073 +0000 UTC m=+1755.497041035" watchObservedRunningTime="2026-02-02 11:26:18.497299744 +0000 UTC m=+1755.501548706" Feb 02 11:26:21 crc kubenswrapper[4925]: I0202 11:26:21.665139 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:26:21 crc kubenswrapper[4925]: E0202 11:26:21.666006 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:26:33 crc kubenswrapper[4925]: I0202 11:26:33.664127 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:26:33 crc kubenswrapper[4925]: E0202 11:26:33.665162 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.095681 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fl5qj"] Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.098510 4925 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.131817 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6hf5\" (UniqueName: \"kubernetes.io/projected/fb42f46b-de91-4d28-baf4-37687b7d0572-kube-api-access-f6hf5\") pod \"redhat-operators-fl5qj\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.131932 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-catalog-content\") pod \"redhat-operators-fl5qj\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.132059 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-utilities\") pod \"redhat-operators-fl5qj\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.134029 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fl5qj"] Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.234288 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6hf5\" (UniqueName: \"kubernetes.io/projected/fb42f46b-de91-4d28-baf4-37687b7d0572-kube-api-access-f6hf5\") pod \"redhat-operators-fl5qj\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.234368 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-catalog-content\") pod \"redhat-operators-fl5qj\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.234452 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-utilities\") pod \"redhat-operators-fl5qj\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.235119 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-utilities\") pod \"redhat-operators-fl5qj\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.235193 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-catalog-content\") pod \"redhat-operators-fl5qj\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.253057 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6hf5\" (UniqueName: \"kubernetes.io/projected/fb42f46b-de91-4d28-baf4-37687b7d0572-kube-api-access-f6hf5\") pod \"redhat-operators-fl5qj\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.472024 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:43 crc kubenswrapper[4925]: I0202 11:26:43.936766 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fl5qj"] Feb 02 11:26:44 crc kubenswrapper[4925]: I0202 11:26:44.687825 4925 generic.go:334] "Generic (PLEG): container finished" podID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerID="c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd" exitCode=0 Feb 02 11:26:44 crc kubenswrapper[4925]: I0202 11:26:44.687992 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl5qj" event={"ID":"fb42f46b-de91-4d28-baf4-37687b7d0572","Type":"ContainerDied","Data":"c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd"} Feb 02 11:26:44 crc kubenswrapper[4925]: I0202 11:26:44.689150 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl5qj" event={"ID":"fb42f46b-de91-4d28-baf4-37687b7d0572","Type":"ContainerStarted","Data":"9e38d0a54dacac078cdda636476a31a7b061cb86a146fc8e4c9a8ece2a321700"} Feb 02 11:26:45 crc kubenswrapper[4925]: I0202 11:26:45.665195 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:26:45 crc kubenswrapper[4925]: E0202 11:26:45.666273 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:26:46 crc kubenswrapper[4925]: I0202 11:26:46.717849 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl5qj" 
event={"ID":"fb42f46b-de91-4d28-baf4-37687b7d0572","Type":"ContainerStarted","Data":"4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead"} Feb 02 11:26:47 crc kubenswrapper[4925]: I0202 11:26:47.737720 4925 generic.go:334] "Generic (PLEG): container finished" podID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerID="4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead" exitCode=0 Feb 02 11:26:47 crc kubenswrapper[4925]: I0202 11:26:47.737820 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl5qj" event={"ID":"fb42f46b-de91-4d28-baf4-37687b7d0572","Type":"ContainerDied","Data":"4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead"} Feb 02 11:26:49 crc kubenswrapper[4925]: I0202 11:26:49.757542 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl5qj" event={"ID":"fb42f46b-de91-4d28-baf4-37687b7d0572","Type":"ContainerStarted","Data":"d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2"} Feb 02 11:26:49 crc kubenswrapper[4925]: I0202 11:26:49.788164 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fl5qj" podStartSLOduration=2.360494631 podStartE2EDuration="6.788143246s" podCreationTimestamp="2026-02-02 11:26:43 +0000 UTC" firstStartedPulling="2026-02-02 11:26:44.689107277 +0000 UTC m=+1781.693356239" lastFinishedPulling="2026-02-02 11:26:49.116755892 +0000 UTC m=+1786.121004854" observedRunningTime="2026-02-02 11:26:49.77862136 +0000 UTC m=+1786.782870342" watchObservedRunningTime="2026-02-02 11:26:49.788143246 +0000 UTC m=+1786.792392208" Feb 02 11:26:53 crc kubenswrapper[4925]: I0202 11:26:53.473540 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:53 crc kubenswrapper[4925]: I0202 11:26:53.473920 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:26:54 crc kubenswrapper[4925]: I0202 11:26:54.524897 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fl5qj" podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerName="registry-server" probeResult="failure" output=< Feb 02 11:26:54 crc kubenswrapper[4925]: timeout: failed to connect service ":50051" within 1s Feb 02 11:26:54 crc kubenswrapper[4925]: > Feb 02 11:26:59 crc kubenswrapper[4925]: I0202 11:26:59.664323 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:26:59 crc kubenswrapper[4925]: E0202 11:26:59.665974 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:27:03 crc kubenswrapper[4925]: I0202 11:27:03.517111 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:27:03 crc kubenswrapper[4925]: I0202 11:27:03.560645 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:27:03 crc kubenswrapper[4925]: I0202 11:27:03.758095 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fl5qj"] Feb 02 11:27:04 crc kubenswrapper[4925]: I0202 11:27:04.883376 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fl5qj" podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerName="registry-server" 
containerID="cri-o://d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2" gracePeriod=2 Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.344193 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.433119 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6hf5\" (UniqueName: \"kubernetes.io/projected/fb42f46b-de91-4d28-baf4-37687b7d0572-kube-api-access-f6hf5\") pod \"fb42f46b-de91-4d28-baf4-37687b7d0572\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.433305 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-utilities\") pod \"fb42f46b-de91-4d28-baf4-37687b7d0572\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.433351 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-catalog-content\") pod \"fb42f46b-de91-4d28-baf4-37687b7d0572\" (UID: \"fb42f46b-de91-4d28-baf4-37687b7d0572\") " Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.434252 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-utilities" (OuterVolumeSpecName: "utilities") pod "fb42f46b-de91-4d28-baf4-37687b7d0572" (UID: "fb42f46b-de91-4d28-baf4-37687b7d0572"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.438578 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb42f46b-de91-4d28-baf4-37687b7d0572-kube-api-access-f6hf5" (OuterVolumeSpecName: "kube-api-access-f6hf5") pod "fb42f46b-de91-4d28-baf4-37687b7d0572" (UID: "fb42f46b-de91-4d28-baf4-37687b7d0572"). InnerVolumeSpecName "kube-api-access-f6hf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.535477 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.535778 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6hf5\" (UniqueName: \"kubernetes.io/projected/fb42f46b-de91-4d28-baf4-37687b7d0572-kube-api-access-f6hf5\") on node \"crc\" DevicePath \"\"" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.550197 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb42f46b-de91-4d28-baf4-37687b7d0572" (UID: "fb42f46b-de91-4d28-baf4-37687b7d0572"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.637766 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb42f46b-de91-4d28-baf4-37687b7d0572-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.895987 4925 generic.go:334] "Generic (PLEG): container finished" podID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerID="d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2" exitCode=0 Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.896027 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fl5qj" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.896043 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl5qj" event={"ID":"fb42f46b-de91-4d28-baf4-37687b7d0572","Type":"ContainerDied","Data":"d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2"} Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.896105 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl5qj" event={"ID":"fb42f46b-de91-4d28-baf4-37687b7d0572","Type":"ContainerDied","Data":"9e38d0a54dacac078cdda636476a31a7b061cb86a146fc8e4c9a8ece2a321700"} Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.896170 4925 scope.go:117] "RemoveContainer" containerID="d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.936819 4925 scope.go:117] "RemoveContainer" containerID="4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead" Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.940179 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fl5qj"] Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 
11:27:05.948406 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fl5qj"] Feb 02 11:27:05 crc kubenswrapper[4925]: I0202 11:27:05.968771 4925 scope.go:117] "RemoveContainer" containerID="c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd" Feb 02 11:27:06 crc kubenswrapper[4925]: I0202 11:27:06.011111 4925 scope.go:117] "RemoveContainer" containerID="d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2" Feb 02 11:27:06 crc kubenswrapper[4925]: E0202 11:27:06.011579 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2\": container with ID starting with d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2 not found: ID does not exist" containerID="d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2" Feb 02 11:27:06 crc kubenswrapper[4925]: I0202 11:27:06.011610 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2"} err="failed to get container status \"d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2\": rpc error: code = NotFound desc = could not find container \"d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2\": container with ID starting with d39aba6bea0c032257922c250612c75f61e7fb963b287a1ef9f90b6a83ab28e2 not found: ID does not exist" Feb 02 11:27:06 crc kubenswrapper[4925]: I0202 11:27:06.011632 4925 scope.go:117] "RemoveContainer" containerID="4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead" Feb 02 11:27:06 crc kubenswrapper[4925]: E0202 11:27:06.011958 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead\": container with ID 
starting with 4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead not found: ID does not exist" containerID="4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead" Feb 02 11:27:06 crc kubenswrapper[4925]: I0202 11:27:06.011994 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead"} err="failed to get container status \"4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead\": rpc error: code = NotFound desc = could not find container \"4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead\": container with ID starting with 4408fb7b216df66717c4ef5f02358e9f07aad4606171dc734b13592a7e2ccead not found: ID does not exist" Feb 02 11:27:06 crc kubenswrapper[4925]: I0202 11:27:06.012011 4925 scope.go:117] "RemoveContainer" containerID="c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd" Feb 02 11:27:06 crc kubenswrapper[4925]: E0202 11:27:06.012363 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd\": container with ID starting with c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd not found: ID does not exist" containerID="c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd" Feb 02 11:27:06 crc kubenswrapper[4925]: I0202 11:27:06.012464 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd"} err="failed to get container status \"c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd\": rpc error: code = NotFound desc = could not find container \"c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd\": container with ID starting with c529cc977b0343960756262f6f9653df0f24bf67048b26c12432b9671c3a65fd not found: 
ID does not exist" Feb 02 11:27:06 crc kubenswrapper[4925]: I0202 11:27:06.674405 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" path="/var/lib/kubelet/pods/fb42f46b-de91-4d28-baf4-37687b7d0572/volumes" Feb 02 11:27:13 crc kubenswrapper[4925]: I0202 11:27:13.034623 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qpjpr"] Feb 02 11:27:13 crc kubenswrapper[4925]: I0202 11:27:13.043800 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-tz6vw"] Feb 02 11:27:13 crc kubenswrapper[4925]: I0202 11:27:13.051552 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qpjpr"] Feb 02 11:27:13 crc kubenswrapper[4925]: I0202 11:27:13.058936 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-tz6vw"] Feb 02 11:27:14 crc kubenswrapper[4925]: I0202 11:27:14.028348 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cvgcr"] Feb 02 11:27:14 crc kubenswrapper[4925]: I0202 11:27:14.039854 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cvgcr"] Feb 02 11:27:14 crc kubenswrapper[4925]: I0202 11:27:14.665845 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:27:14 crc kubenswrapper[4925]: E0202 11:27:14.666512 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:27:14 crc kubenswrapper[4925]: I0202 11:27:14.675576 4925 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="050787f9-1101-4195-9b1b-1f1b5fa090cd" path="/var/lib/kubelet/pods/050787f9-1101-4195-9b1b-1f1b5fa090cd/volumes" Feb 02 11:27:14 crc kubenswrapper[4925]: I0202 11:27:14.676289 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf" path="/var/lib/kubelet/pods/07acf1b2-28bd-41ce-bf7f-08c5b2c5ebaf/volumes" Feb 02 11:27:14 crc kubenswrapper[4925]: I0202 11:27:14.676882 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7159077-105a-48ca-96f6-12ffb19c7a93" path="/var/lib/kubelet/pods/d7159077-105a-48ca-96f6-12ffb19c7a93/volumes" Feb 02 11:27:17 crc kubenswrapper[4925]: I0202 11:27:17.025635 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a0ad-account-create-update-5n6nq"] Feb 02 11:27:17 crc kubenswrapper[4925]: I0202 11:27:17.034709 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a0ad-account-create-update-5n6nq"] Feb 02 11:27:18 crc kubenswrapper[4925]: I0202 11:27:18.674616 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5a7454-06cf-426b-93f7-8d0c2b0a27d2" path="/var/lib/kubelet/pods/db5a7454-06cf-426b-93f7-8d0c2b0a27d2/volumes" Feb 02 11:27:19 crc kubenswrapper[4925]: I0202 11:27:19.035236 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-efb2-account-create-update-tnvpf"] Feb 02 11:27:19 crc kubenswrapper[4925]: I0202 11:27:19.043621 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b182-account-create-update-sxbt4"] Feb 02 11:27:19 crc kubenswrapper[4925]: I0202 11:27:19.052988 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b182-account-create-update-sxbt4"] Feb 02 11:27:19 crc kubenswrapper[4925]: I0202 11:27:19.060645 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-efb2-account-create-update-tnvpf"] Feb 02 11:27:20 crc kubenswrapper[4925]: 
I0202 11:27:20.676339 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7070fb66-e433-4f35-b5a7-b4666e8c0638" path="/var/lib/kubelet/pods/7070fb66-e433-4f35-b5a7-b4666e8c0638/volumes" Feb 02 11:27:20 crc kubenswrapper[4925]: I0202 11:27:20.677594 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5081cd7-3a75-408b-a30d-eb1156689a3d" path="/var/lib/kubelet/pods/e5081cd7-3a75-408b-a30d-eb1156689a3d/volumes" Feb 02 11:27:23 crc kubenswrapper[4925]: I0202 11:27:23.039204 4925 generic.go:334] "Generic (PLEG): container finished" podID="dd48f74f-90ff-4eee-bc12-cc30de87d165" containerID="c5190dcc7a541c402fb0b9ca1e02468e1c76dac0917fc01cca7ae56ac3d0af4a" exitCode=0 Feb 02 11:27:23 crc kubenswrapper[4925]: I0202 11:27:23.039300 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" event={"ID":"dd48f74f-90ff-4eee-bc12-cc30de87d165","Type":"ContainerDied","Data":"c5190dcc7a541c402fb0b9ca1e02468e1c76dac0917fc01cca7ae56ac3d0af4a"} Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.499500 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.576859 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-inventory\") pod \"dd48f74f-90ff-4eee-bc12-cc30de87d165\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.577225 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hzsw\" (UniqueName: \"kubernetes.io/projected/dd48f74f-90ff-4eee-bc12-cc30de87d165-kube-api-access-5hzsw\") pod \"dd48f74f-90ff-4eee-bc12-cc30de87d165\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.577412 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-ssh-key-openstack-edpm-ipam\") pod \"dd48f74f-90ff-4eee-bc12-cc30de87d165\" (UID: \"dd48f74f-90ff-4eee-bc12-cc30de87d165\") " Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.582393 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd48f74f-90ff-4eee-bc12-cc30de87d165-kube-api-access-5hzsw" (OuterVolumeSpecName: "kube-api-access-5hzsw") pod "dd48f74f-90ff-4eee-bc12-cc30de87d165" (UID: "dd48f74f-90ff-4eee-bc12-cc30de87d165"). InnerVolumeSpecName "kube-api-access-5hzsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.601992 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-inventory" (OuterVolumeSpecName: "inventory") pod "dd48f74f-90ff-4eee-bc12-cc30de87d165" (UID: "dd48f74f-90ff-4eee-bc12-cc30de87d165"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.603032 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd48f74f-90ff-4eee-bc12-cc30de87d165" (UID: "dd48f74f-90ff-4eee-bc12-cc30de87d165"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.680371 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hzsw\" (UniqueName: \"kubernetes.io/projected/dd48f74f-90ff-4eee-bc12-cc30de87d165-kube-api-access-5hzsw\") on node \"crc\" DevicePath \"\"" Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.680410 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:27:24 crc kubenswrapper[4925]: I0202 11:27:24.680422 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd48f74f-90ff-4eee-bc12-cc30de87d165-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.054982 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" event={"ID":"dd48f74f-90ff-4eee-bc12-cc30de87d165","Type":"ContainerDied","Data":"14b09973f99d25a993097ab1c43d70ea3735052080cec10d6649400498429507"} Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.055029 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14b09973f99d25a993097ab1c43d70ea3735052080cec10d6649400498429507" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 
11:27:25.055100 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.141548 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl"] Feb 02 11:27:25 crc kubenswrapper[4925]: E0202 11:27:25.142295 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerName="extract-utilities" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.142315 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerName="extract-utilities" Feb 02 11:27:25 crc kubenswrapper[4925]: E0202 11:27:25.142328 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerName="extract-content" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.142334 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerName="extract-content" Feb 02 11:27:25 crc kubenswrapper[4925]: E0202 11:27:25.142350 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd48f74f-90ff-4eee-bc12-cc30de87d165" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.142377 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd48f74f-90ff-4eee-bc12-cc30de87d165" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:27:25 crc kubenswrapper[4925]: E0202 11:27:25.142399 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerName="registry-server" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.142406 4925 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerName="registry-server" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.142563 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd48f74f-90ff-4eee-bc12-cc30de87d165" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.142576 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb42f46b-de91-4d28-baf4-37687b7d0572" containerName="registry-server" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.143215 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.145768 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.148622 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.149747 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.149958 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.152705 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl"] Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.189227 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-ssh-key-openstack-edpm-ipam\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.189621 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcb4r\" (UniqueName: \"kubernetes.io/projected/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-kube-api-access-xcb4r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.189775 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.291638 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcb4r\" (UniqueName: \"kubernetes.io/projected/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-kube-api-access-xcb4r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.291748 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.291874 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.296433 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.296479 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.307638 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcb4r\" (UniqueName: \"kubernetes.io/projected/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-kube-api-access-xcb4r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:25 crc kubenswrapper[4925]: I0202 11:27:25.458781 4925 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:26 crc kubenswrapper[4925]: I0202 11:27:26.549881 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl"] Feb 02 11:27:26 crc kubenswrapper[4925]: I0202 11:27:26.665201 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:27:26 crc kubenswrapper[4925]: E0202 11:27:26.665483 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:27:27 crc kubenswrapper[4925]: I0202 11:27:27.073264 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" event={"ID":"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae","Type":"ContainerStarted","Data":"70e07c970b35d63e13bad4d2e25726c03af85f24d36c2e6269618da5da587c2f"} Feb 02 11:27:27 crc kubenswrapper[4925]: I0202 11:27:27.090822 4925 scope.go:117] "RemoveContainer" containerID="5eda4a561543db11e81df459f7c307989e1184aa91e335a2b63c30a73bb3bb61" Feb 02 11:27:27 crc kubenswrapper[4925]: I0202 11:27:27.128614 4925 scope.go:117] "RemoveContainer" containerID="d48df7b05a145a6ec40fca6d9424318169933e1edae8259e9729c18784d17f67" Feb 02 11:27:27 crc kubenswrapper[4925]: I0202 11:27:27.154189 4925 scope.go:117] "RemoveContainer" containerID="c4c4ebf5346373705514add06e68c4e0277719ed17a240b284ea32573bdbb334" Feb 02 11:27:27 crc kubenswrapper[4925]: I0202 11:27:27.269045 4925 scope.go:117] "RemoveContainer" 
containerID="007227535bdf2c53462059f2b1f0a94457acba5fee127eb94bd29005c837b34d" Feb 02 11:27:27 crc kubenswrapper[4925]: I0202 11:27:27.289564 4925 scope.go:117] "RemoveContainer" containerID="ef83eeb42f23f8ad93df1ed6b1afc34c5562730eba18eaed7322434108d17424" Feb 02 11:27:27 crc kubenswrapper[4925]: I0202 11:27:27.311172 4925 scope.go:117] "RemoveContainer" containerID="1f7a8a364bd2ddcd81af382722407f6853456a34c64e5002cf9943aefcebf439" Feb 02 11:27:28 crc kubenswrapper[4925]: I0202 11:27:28.081938 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" event={"ID":"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae","Type":"ContainerStarted","Data":"dfdbd393999fcbb5136339f541a2102380216c7c05b9c0bcc026b3bd82abceee"} Feb 02 11:27:28 crc kubenswrapper[4925]: I0202 11:27:28.097413 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" podStartSLOduration=2.5076047089999998 podStartE2EDuration="3.097390263s" podCreationTimestamp="2026-02-02 11:27:25 +0000 UTC" firstStartedPulling="2026-02-02 11:27:26.55277981 +0000 UTC m=+1823.557028772" lastFinishedPulling="2026-02-02 11:27:27.142565364 +0000 UTC m=+1824.146814326" observedRunningTime="2026-02-02 11:27:28.092934773 +0000 UTC m=+1825.097183755" watchObservedRunningTime="2026-02-02 11:27:28.097390263 +0000 UTC m=+1825.101639225" Feb 02 11:27:32 crc kubenswrapper[4925]: I0202 11:27:32.119661 4925 generic.go:334] "Generic (PLEG): container finished" podID="987e20ae-ea4c-4754-9bd2-9dcb4fda76ae" containerID="dfdbd393999fcbb5136339f541a2102380216c7c05b9c0bcc026b3bd82abceee" exitCode=0 Feb 02 11:27:32 crc kubenswrapper[4925]: I0202 11:27:32.119769 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" 
event={"ID":"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae","Type":"ContainerDied","Data":"dfdbd393999fcbb5136339f541a2102380216c7c05b9c0bcc026b3bd82abceee"} Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.506091 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.634344 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcb4r\" (UniqueName: \"kubernetes.io/projected/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-kube-api-access-xcb4r\") pod \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.634413 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-ssh-key-openstack-edpm-ipam\") pod \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.634663 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-inventory\") pod \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\" (UID: \"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae\") " Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.641146 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-kube-api-access-xcb4r" (OuterVolumeSpecName: "kube-api-access-xcb4r") pod "987e20ae-ea4c-4754-9bd2-9dcb4fda76ae" (UID: "987e20ae-ea4c-4754-9bd2-9dcb4fda76ae"). InnerVolumeSpecName "kube-api-access-xcb4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.663014 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-inventory" (OuterVolumeSpecName: "inventory") pod "987e20ae-ea4c-4754-9bd2-9dcb4fda76ae" (UID: "987e20ae-ea4c-4754-9bd2-9dcb4fda76ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.664218 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "987e20ae-ea4c-4754-9bd2-9dcb4fda76ae" (UID: "987e20ae-ea4c-4754-9bd2-9dcb4fda76ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.737096 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.737127 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcb4r\" (UniqueName: \"kubernetes.io/projected/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-kube-api-access-xcb4r\") on node \"crc\" DevicePath \"\"" Feb 02 11:27:33 crc kubenswrapper[4925]: I0202 11:27:33.737137 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.138953 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" 
event={"ID":"987e20ae-ea4c-4754-9bd2-9dcb4fda76ae","Type":"ContainerDied","Data":"70e07c970b35d63e13bad4d2e25726c03af85f24d36c2e6269618da5da587c2f"} Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.139008 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70e07c970b35d63e13bad4d2e25726c03af85f24d36c2e6269618da5da587c2f" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.139107 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.206999 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf"] Feb 02 11:27:34 crc kubenswrapper[4925]: E0202 11:27:34.207481 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="987e20ae-ea4c-4754-9bd2-9dcb4fda76ae" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.207509 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="987e20ae-ea4c-4754-9bd2-9dcb4fda76ae" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.207756 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="987e20ae-ea4c-4754-9bd2-9dcb4fda76ae" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.208601 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.210445 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.210520 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.211541 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.211829 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.219864 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf"] Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.244755 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgvzg\" (UniqueName: \"kubernetes.io/projected/e14ed962-961f-47b9-8694-880007d9538f-kube-api-access-dgvzg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t26jf\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.244905 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t26jf\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.244961 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t26jf\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.345892 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t26jf\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.346196 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t26jf\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.346247 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgvzg\" (UniqueName: \"kubernetes.io/projected/e14ed962-961f-47b9-8694-880007d9538f-kube-api-access-dgvzg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t26jf\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.350061 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-t26jf\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.350979 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t26jf\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.364198 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgvzg\" (UniqueName: \"kubernetes.io/projected/e14ed962-961f-47b9-8694-880007d9538f-kube-api-access-dgvzg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t26jf\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:34 crc kubenswrapper[4925]: I0202 11:27:34.534569 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:27:35 crc kubenswrapper[4925]: I0202 11:27:35.055131 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf"] Feb 02 11:27:35 crc kubenswrapper[4925]: I0202 11:27:35.149049 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" event={"ID":"e14ed962-961f-47b9-8694-880007d9538f","Type":"ContainerStarted","Data":"cbf921997a3e58ed7717817af7f5016d177b58d50a4d8135e35ec0101c9726dd"} Feb 02 11:27:37 crc kubenswrapper[4925]: I0202 11:27:37.166360 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" event={"ID":"e14ed962-961f-47b9-8694-880007d9538f","Type":"ContainerStarted","Data":"48b033738c2f9271d217b8a100fbde866a7a91bfccbfba56b6f9212a13e3996d"} Feb 02 11:27:37 crc kubenswrapper[4925]: I0202 11:27:37.193737 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" podStartSLOduration=1.746415658 podStartE2EDuration="3.193703359s" podCreationTimestamp="2026-02-02 11:27:34 +0000 UTC" firstStartedPulling="2026-02-02 11:27:35.060599817 +0000 UTC m=+1832.064848779" lastFinishedPulling="2026-02-02 11:27:36.507887508 +0000 UTC m=+1833.512136480" observedRunningTime="2026-02-02 11:27:37.185263163 +0000 UTC m=+1834.189512125" watchObservedRunningTime="2026-02-02 11:27:37.193703359 +0000 UTC m=+1834.197952361" Feb 02 11:27:40 crc kubenswrapper[4925]: I0202 11:27:40.665072 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:27:40 crc kubenswrapper[4925]: E0202 11:27:40.666068 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:27:53 crc kubenswrapper[4925]: I0202 11:27:53.664440 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:27:53 crc kubenswrapper[4925]: E0202 11:27:53.665353 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.059978 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-da80-account-create-update-2hpp2"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.070704 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rx9rv"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.082179 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b379-account-create-update-kqh5k"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.097785 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xmpnn"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.107269 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rx9rv"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.114243 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rdb26"] Feb 02 11:27:54 crc kubenswrapper[4925]: 
I0202 11:27:54.122995 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rdb26"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.130954 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-da80-account-create-update-2hpp2"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.139449 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rclxc"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.164469 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c6e8-account-create-update-s5hwt"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.171130 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xmpnn"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.180197 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b379-account-create-update-kqh5k"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.185094 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rclxc"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.191333 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c6e8-account-create-update-s5hwt"] Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.674368 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cdfffad-8b9e-41d8-b2da-a72d965d36a0" path="/var/lib/kubelet/pods/2cdfffad-8b9e-41d8-b2da-a72d965d36a0/volumes" Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.675180 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53fefe30-82c4-4d41-9af2-c23e671ce91e" path="/var/lib/kubelet/pods/53fefe30-82c4-4d41-9af2-c23e671ce91e/volumes" Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.675751 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5826e7fc-d781-46da-a2dc-09baa99ab163" 
path="/var/lib/kubelet/pods/5826e7fc-d781-46da-a2dc-09baa99ab163/volumes" Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.676417 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0cf93f3-efad-4435-820a-f9b4631c1efd" path="/var/lib/kubelet/pods/c0cf93f3-efad-4435-820a-f9b4631c1efd/volumes" Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.677407 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c" path="/var/lib/kubelet/pods/cc1c4638-cbe1-4bd1-8e50-dfb0aa7f069c/volumes" Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.677909 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03ccf8f-b9cc-4352-8c6b-7a5705807701" path="/var/lib/kubelet/pods/f03ccf8f-b9cc-4352-8c6b-7a5705807701/volumes" Feb 02 11:27:54 crc kubenswrapper[4925]: I0202 11:27:54.678439 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab8fa26-070e-4295-bbf4-b44a994f2ba0" path="/var/lib/kubelet/pods/fab8fa26-070e-4295-bbf4-b44a994f2ba0/volumes" Feb 02 11:28:04 crc kubenswrapper[4925]: I0202 11:28:04.029223 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rvhtj"] Feb 02 11:28:04 crc kubenswrapper[4925]: I0202 11:28:04.043104 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rvhtj"] Feb 02 11:28:04 crc kubenswrapper[4925]: I0202 11:28:04.674546 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de3b7878-e41d-4ea8-b866-f34a48455d29" path="/var/lib/kubelet/pods/de3b7878-e41d-4ea8-b866-f34a48455d29/volumes" Feb 02 11:28:08 crc kubenswrapper[4925]: I0202 11:28:08.664840 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:28:08 crc kubenswrapper[4925]: E0202 11:28:08.665526 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:28:12 crc kubenswrapper[4925]: I0202 11:28:12.448568 4925 generic.go:334] "Generic (PLEG): container finished" podID="e14ed962-961f-47b9-8694-880007d9538f" containerID="48b033738c2f9271d217b8a100fbde866a7a91bfccbfba56b6f9212a13e3996d" exitCode=0 Feb 02 11:28:12 crc kubenswrapper[4925]: I0202 11:28:12.448675 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" event={"ID":"e14ed962-961f-47b9-8694-880007d9538f","Type":"ContainerDied","Data":"48b033738c2f9271d217b8a100fbde866a7a91bfccbfba56b6f9212a13e3996d"} Feb 02 11:28:13 crc kubenswrapper[4925]: I0202 11:28:13.827067 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:28:13 crc kubenswrapper[4925]: I0202 11:28:13.933009 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-inventory\") pod \"e14ed962-961f-47b9-8694-880007d9538f\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " Feb 02 11:28:13 crc kubenswrapper[4925]: I0202 11:28:13.933243 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgvzg\" (UniqueName: \"kubernetes.io/projected/e14ed962-961f-47b9-8694-880007d9538f-kube-api-access-dgvzg\") pod \"e14ed962-961f-47b9-8694-880007d9538f\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " Feb 02 11:28:13 crc kubenswrapper[4925]: I0202 11:28:13.933368 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-ssh-key-openstack-edpm-ipam\") pod \"e14ed962-961f-47b9-8694-880007d9538f\" (UID: \"e14ed962-961f-47b9-8694-880007d9538f\") " Feb 02 11:28:13 crc kubenswrapper[4925]: I0202 11:28:13.939407 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14ed962-961f-47b9-8694-880007d9538f-kube-api-access-dgvzg" (OuterVolumeSpecName: "kube-api-access-dgvzg") pod "e14ed962-961f-47b9-8694-880007d9538f" (UID: "e14ed962-961f-47b9-8694-880007d9538f"). InnerVolumeSpecName "kube-api-access-dgvzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:28:13 crc kubenswrapper[4925]: I0202 11:28:13.957285 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-inventory" (OuterVolumeSpecName: "inventory") pod "e14ed962-961f-47b9-8694-880007d9538f" (UID: "e14ed962-961f-47b9-8694-880007d9538f"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:28:13 crc kubenswrapper[4925]: I0202 11:28:13.969681 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e14ed962-961f-47b9-8694-880007d9538f" (UID: "e14ed962-961f-47b9-8694-880007d9538f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.035312 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.035348 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e14ed962-961f-47b9-8694-880007d9538f-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.035360 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgvzg\" (UniqueName: \"kubernetes.io/projected/e14ed962-961f-47b9-8694-880007d9538f-kube-api-access-dgvzg\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.475974 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" event={"ID":"e14ed962-961f-47b9-8694-880007d9538f","Type":"ContainerDied","Data":"cbf921997a3e58ed7717817af7f5016d177b58d50a4d8135e35ec0101c9726dd"} Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.476021 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbf921997a3e58ed7717817af7f5016d177b58d50a4d8135e35ec0101c9726dd" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 
11:28:14.476061 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.544547 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w"] Feb 02 11:28:14 crc kubenswrapper[4925]: E0202 11:28:14.544920 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14ed962-961f-47b9-8694-880007d9538f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.544938 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14ed962-961f-47b9-8694-880007d9538f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.545187 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14ed962-961f-47b9-8694-880007d9538f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.545766 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.551512 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.551641 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.551717 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.551941 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.552955 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w"] Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.645000 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.645117 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.645199 
4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6tz\" (UniqueName: \"kubernetes.io/projected/bdf98756-eb42-441f-ac98-d877ebd79a9a-kube-api-access-lk6tz\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.746440 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.746518 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.747778 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6tz\" (UniqueName: \"kubernetes.io/projected/bdf98756-eb42-441f-ac98-d877ebd79a9a-kube-api-access-lk6tz\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.751360 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.751785 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.769162 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6tz\" (UniqueName: \"kubernetes.io/projected/bdf98756-eb42-441f-ac98-d877ebd79a9a-kube-api-access-lk6tz\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:14 crc kubenswrapper[4925]: I0202 11:28:14.871312 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:15 crc kubenswrapper[4925]: I0202 11:28:15.411892 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w"] Feb 02 11:28:15 crc kubenswrapper[4925]: I0202 11:28:15.484263 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" event={"ID":"bdf98756-eb42-441f-ac98-d877ebd79a9a","Type":"ContainerStarted","Data":"b80243db10c9b8c9be094b646acc597b3a65952a27b77363dcc76bb126d0150e"} Feb 02 11:28:16 crc kubenswrapper[4925]: I0202 11:28:16.493275 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" event={"ID":"bdf98756-eb42-441f-ac98-d877ebd79a9a","Type":"ContainerStarted","Data":"4b82a9cc75b1b621942c6328c7ad5136df62b758c4bd5df9f1c719b40acea912"} Feb 02 11:28:21 crc kubenswrapper[4925]: I0202 11:28:21.536813 4925 generic.go:334] "Generic (PLEG): container finished" podID="bdf98756-eb42-441f-ac98-d877ebd79a9a" containerID="4b82a9cc75b1b621942c6328c7ad5136df62b758c4bd5df9f1c719b40acea912" exitCode=0 Feb 02 11:28:21 crc kubenswrapper[4925]: I0202 11:28:21.536906 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" event={"ID":"bdf98756-eb42-441f-ac98-d877ebd79a9a","Type":"ContainerDied","Data":"4b82a9cc75b1b621942c6328c7ad5136df62b758c4bd5df9f1c719b40acea912"} Feb 02 11:28:22 crc kubenswrapper[4925]: I0202 11:28:22.956866 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.101639 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk6tz\" (UniqueName: \"kubernetes.io/projected/bdf98756-eb42-441f-ac98-d877ebd79a9a-kube-api-access-lk6tz\") pod \"bdf98756-eb42-441f-ac98-d877ebd79a9a\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.101743 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-ssh-key-openstack-edpm-ipam\") pod \"bdf98756-eb42-441f-ac98-d877ebd79a9a\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.101864 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-inventory\") pod \"bdf98756-eb42-441f-ac98-d877ebd79a9a\" (UID: \"bdf98756-eb42-441f-ac98-d877ebd79a9a\") " Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.106964 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf98756-eb42-441f-ac98-d877ebd79a9a-kube-api-access-lk6tz" (OuterVolumeSpecName: "kube-api-access-lk6tz") pod "bdf98756-eb42-441f-ac98-d877ebd79a9a" (UID: "bdf98756-eb42-441f-ac98-d877ebd79a9a"). InnerVolumeSpecName "kube-api-access-lk6tz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.127297 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bdf98756-eb42-441f-ac98-d877ebd79a9a" (UID: "bdf98756-eb42-441f-ac98-d877ebd79a9a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.129929 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-inventory" (OuterVolumeSpecName: "inventory") pod "bdf98756-eb42-441f-ac98-d877ebd79a9a" (UID: "bdf98756-eb42-441f-ac98-d877ebd79a9a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.204117 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk6tz\" (UniqueName: \"kubernetes.io/projected/bdf98756-eb42-441f-ac98-d877ebd79a9a-kube-api-access-lk6tz\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.204447 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.204464 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdf98756-eb42-441f-ac98-d877ebd79a9a-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.557599 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" 
event={"ID":"bdf98756-eb42-441f-ac98-d877ebd79a9a","Type":"ContainerDied","Data":"b80243db10c9b8c9be094b646acc597b3a65952a27b77363dcc76bb126d0150e"} Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.557643 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b80243db10c9b8c9be094b646acc597b3a65952a27b77363dcc76bb126d0150e" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.557959 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.627908 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8"] Feb 02 11:28:23 crc kubenswrapper[4925]: E0202 11:28:23.628325 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf98756-eb42-441f-ac98-d877ebd79a9a" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.628347 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf98756-eb42-441f-ac98-d877ebd79a9a" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.628541 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf98756-eb42-441f-ac98-d877ebd79a9a" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.629191 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.630955 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.631234 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.633486 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.641319 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.648549 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8"] Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.664611 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:28:23 crc kubenswrapper[4925]: E0202 11:28:23.664944 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.712613 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.712676 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.712987 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-787nh\" (UniqueName: \"kubernetes.io/projected/6d93fef7-4ec8-456f-a138-40b0175ce0ce-kube-api-access-787nh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.815325 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-787nh\" (UniqueName: \"kubernetes.io/projected/6d93fef7-4ec8-456f-a138-40b0175ce0ce-kube-api-access-787nh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.815398 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.815450 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.819155 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.819806 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.860305 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-787nh\" (UniqueName: \"kubernetes.io/projected/6d93fef7-4ec8-456f-a138-40b0175ce0ce-kube-api-access-787nh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:23 crc kubenswrapper[4925]: I0202 11:28:23.946637 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:28:24 crc kubenswrapper[4925]: I0202 11:28:24.407663 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8"] Feb 02 11:28:24 crc kubenswrapper[4925]: I0202 11:28:24.587213 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" event={"ID":"6d93fef7-4ec8-456f-a138-40b0175ce0ce","Type":"ContainerStarted","Data":"1af2352bc35984296a5d7a2c8cc61479534a15906c7ae4fa02c929c376f26d06"} Feb 02 11:28:25 crc kubenswrapper[4925]: I0202 11:28:25.611300 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" event={"ID":"6d93fef7-4ec8-456f-a138-40b0175ce0ce","Type":"ContainerStarted","Data":"99dca2ab3bd137e6df78f9199328230df4fb85ab96970a902ad9f06da21417e1"} Feb 02 11:28:25 crc kubenswrapper[4925]: I0202 11:28:25.646666 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" podStartSLOduration=2.2044252 podStartE2EDuration="2.646631785s" podCreationTimestamp="2026-02-02 11:28:23 +0000 UTC" firstStartedPulling="2026-02-02 11:28:24.422559902 +0000 UTC m=+1881.426808864" lastFinishedPulling="2026-02-02 11:28:24.864766487 +0000 UTC m=+1881.869015449" observedRunningTime="2026-02-02 11:28:25.633829861 +0000 UTC m=+1882.638078823" watchObservedRunningTime="2026-02-02 11:28:25.646631785 +0000 UTC m=+1882.650880747" Feb 02 11:28:27 crc kubenswrapper[4925]: I0202 11:28:27.446404 4925 scope.go:117] "RemoveContainer" containerID="b0a56371d541f3801251455bc7b78eb139c75200719877c714f4ad75dc0ed4f5" Feb 02 11:28:27 crc kubenswrapper[4925]: I0202 11:28:27.473151 4925 scope.go:117] "RemoveContainer" containerID="40c3bb36d0e8a5e72bafd41685ad46c3560463f3770b1fe47dd3fe03b210f492" Feb 02 11:28:27 crc 
kubenswrapper[4925]: I0202 11:28:27.507744 4925 scope.go:117] "RemoveContainer" containerID="88d9e3c632b7c12348d3057723c79346bf68e8dc1a16d12b97e97c729a28160b" Feb 02 11:28:27 crc kubenswrapper[4925]: I0202 11:28:27.576284 4925 scope.go:117] "RemoveContainer" containerID="be348d929c12884d0f657b05b4c4543e50183126b26a9cfd25fb754a10960370" Feb 02 11:28:27 crc kubenswrapper[4925]: I0202 11:28:27.599961 4925 scope.go:117] "RemoveContainer" containerID="451dd8f6eec86e08c08cfb9a9a4dbadb1ba68945cf87ce057b9d8137ee776837" Feb 02 11:28:27 crc kubenswrapper[4925]: I0202 11:28:27.639698 4925 scope.go:117] "RemoveContainer" containerID="b899db659ca08d16b61d5f09b3f8cc9e00e9e475976198674ce67baa0591334f" Feb 02 11:28:27 crc kubenswrapper[4925]: I0202 11:28:27.687352 4925 scope.go:117] "RemoveContainer" containerID="903ce1f065f896201bddf4ca3c2c292be4b924b8b44d523ce4c4182008d9503d" Feb 02 11:28:27 crc kubenswrapper[4925]: I0202 11:28:27.720756 4925 scope.go:117] "RemoveContainer" containerID="766960461a2916d9e4721732fa0d331e5609fb70f6a8265acc6b4e4afd702a34" Feb 02 11:28:34 crc kubenswrapper[4925]: I0202 11:28:34.670951 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:28:34 crc kubenswrapper[4925]: E0202 11:28:34.671644 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:28:42 crc kubenswrapper[4925]: I0202 11:28:42.052820 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nzrtg"] Feb 02 11:28:42 crc kubenswrapper[4925]: I0202 11:28:42.061059 4925 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-bootstrap-nzrtg"] Feb 02 11:28:42 crc kubenswrapper[4925]: I0202 11:28:42.676384 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d7c58b-1da7-468e-98a5-910467641690" path="/var/lib/kubelet/pods/b4d7c58b-1da7-468e-98a5-910467641690/volumes" Feb 02 11:28:49 crc kubenswrapper[4925]: I0202 11:28:49.664061 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:28:49 crc kubenswrapper[4925]: E0202 11:28:49.664636 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:29:01 crc kubenswrapper[4925]: I0202 11:29:01.664329 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:29:01 crc kubenswrapper[4925]: E0202 11:29:01.665297 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:29:10 crc kubenswrapper[4925]: I0202 11:29:10.977199 4925 generic.go:334] "Generic (PLEG): container finished" podID="6d93fef7-4ec8-456f-a138-40b0175ce0ce" containerID="99dca2ab3bd137e6df78f9199328230df4fb85ab96970a902ad9f06da21417e1" exitCode=0 Feb 02 11:29:10 crc kubenswrapper[4925]: I0202 11:29:10.977289 4925 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" event={"ID":"6d93fef7-4ec8-456f-a138-40b0175ce0ce","Type":"ContainerDied","Data":"99dca2ab3bd137e6df78f9199328230df4fb85ab96970a902ad9f06da21417e1"} Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.346678 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.480592 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-inventory\") pod \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.480713 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-ssh-key-openstack-edpm-ipam\") pod \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.480772 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-787nh\" (UniqueName: \"kubernetes.io/projected/6d93fef7-4ec8-456f-a138-40b0175ce0ce-kube-api-access-787nh\") pod \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\" (UID: \"6d93fef7-4ec8-456f-a138-40b0175ce0ce\") " Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.485869 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d93fef7-4ec8-456f-a138-40b0175ce0ce-kube-api-access-787nh" (OuterVolumeSpecName: "kube-api-access-787nh") pod "6d93fef7-4ec8-456f-a138-40b0175ce0ce" (UID: "6d93fef7-4ec8-456f-a138-40b0175ce0ce"). InnerVolumeSpecName "kube-api-access-787nh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.506265 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-inventory" (OuterVolumeSpecName: "inventory") pod "6d93fef7-4ec8-456f-a138-40b0175ce0ce" (UID: "6d93fef7-4ec8-456f-a138-40b0175ce0ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.511655 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6d93fef7-4ec8-456f-a138-40b0175ce0ce" (UID: "6d93fef7-4ec8-456f-a138-40b0175ce0ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.582676 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-787nh\" (UniqueName: \"kubernetes.io/projected/6d93fef7-4ec8-456f-a138-40b0175ce0ce-kube-api-access-787nh\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.582719 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.582730 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d93fef7-4ec8-456f-a138-40b0175ce0ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.997419 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" 
event={"ID":"6d93fef7-4ec8-456f-a138-40b0175ce0ce","Type":"ContainerDied","Data":"1af2352bc35984296a5d7a2c8cc61479534a15906c7ae4fa02c929c376f26d06"} Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.997873 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1af2352bc35984296a5d7a2c8cc61479534a15906c7ae4fa02c929c376f26d06" Feb 02 11:29:12 crc kubenswrapper[4925]: I0202 11:29:12.997470 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.070447 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h5hp9"] Feb 02 11:29:13 crc kubenswrapper[4925]: E0202 11:29:13.071118 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d93fef7-4ec8-456f-a138-40b0175ce0ce" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.071146 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d93fef7-4ec8-456f-a138-40b0175ce0ce" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.071357 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d93fef7-4ec8-456f-a138-40b0175ce0ce" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.072364 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.074458 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.074513 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.077724 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.078069 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.101264 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h5hp9"] Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.196146 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h5hp9\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.196308 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4p8q\" (UniqueName: \"kubernetes.io/projected/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-kube-api-access-g4p8q\") pod \"ssh-known-hosts-edpm-deployment-h5hp9\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.196517 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h5hp9\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.297971 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h5hp9\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.298033 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4p8q\" (UniqueName: \"kubernetes.io/projected/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-kube-api-access-g4p8q\") pod \"ssh-known-hosts-edpm-deployment-h5hp9\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.298113 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h5hp9\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.302856 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h5hp9\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.313054 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h5hp9\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.319517 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4p8q\" (UniqueName: \"kubernetes.io/projected/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-kube-api-access-g4p8q\") pod \"ssh-known-hosts-edpm-deployment-h5hp9\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.405433 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.938575 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h5hp9"] Feb 02 11:29:13 crc kubenswrapper[4925]: I0202 11:29:13.951861 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:29:14 crc kubenswrapper[4925]: I0202 11:29:14.006612 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" event={"ID":"24d60fb3-cbb9-4272-b6e8-9d31d0124e49","Type":"ContainerStarted","Data":"388f0fe8f7c626c5101b7198c879b52d0945a802c5e50f2006fa72784a00bd56"} Feb 02 11:29:15 crc kubenswrapper[4925]: I0202 11:29:15.016746 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" event={"ID":"24d60fb3-cbb9-4272-b6e8-9d31d0124e49","Type":"ContainerStarted","Data":"9c8984bec5a25dcbef71beed5be03dcd4d44d475ef64a72d759f16f78a49d3da"} Feb 02 11:29:15 crc kubenswrapper[4925]: I0202 
11:29:15.047782 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" podStartSLOduration=1.606923263 podStartE2EDuration="2.047759421s" podCreationTimestamp="2026-02-02 11:29:13 +0000 UTC" firstStartedPulling="2026-02-02 11:29:13.951652172 +0000 UTC m=+1930.955901134" lastFinishedPulling="2026-02-02 11:29:14.39248833 +0000 UTC m=+1931.396737292" observedRunningTime="2026-02-02 11:29:15.044317219 +0000 UTC m=+1932.048566181" watchObservedRunningTime="2026-02-02 11:29:15.047759421 +0000 UTC m=+1932.052008403" Feb 02 11:29:16 crc kubenswrapper[4925]: I0202 11:29:16.664790 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:29:17 crc kubenswrapper[4925]: I0202 11:29:17.033885 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"32e3054c374b3ba80c40613fe344e538f1d343435befd2568b4687e277c37ce4"} Feb 02 11:29:22 crc kubenswrapper[4925]: I0202 11:29:22.076487 4925 generic.go:334] "Generic (PLEG): container finished" podID="24d60fb3-cbb9-4272-b6e8-9d31d0124e49" containerID="9c8984bec5a25dcbef71beed5be03dcd4d44d475ef64a72d759f16f78a49d3da" exitCode=0 Feb 02 11:29:22 crc kubenswrapper[4925]: I0202 11:29:22.076612 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" event={"ID":"24d60fb3-cbb9-4272-b6e8-9d31d0124e49","Type":"ContainerDied","Data":"9c8984bec5a25dcbef71beed5be03dcd4d44d475ef64a72d759f16f78a49d3da"} Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.487191 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.580839 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4p8q\" (UniqueName: \"kubernetes.io/projected/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-kube-api-access-g4p8q\") pod \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.581061 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-ssh-key-openstack-edpm-ipam\") pod \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.581171 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-inventory-0\") pod \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\" (UID: \"24d60fb3-cbb9-4272-b6e8-9d31d0124e49\") " Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.588563 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-kube-api-access-g4p8q" (OuterVolumeSpecName: "kube-api-access-g4p8q") pod "24d60fb3-cbb9-4272-b6e8-9d31d0124e49" (UID: "24d60fb3-cbb9-4272-b6e8-9d31d0124e49"). InnerVolumeSpecName "kube-api-access-g4p8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.608795 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24d60fb3-cbb9-4272-b6e8-9d31d0124e49" (UID: "24d60fb3-cbb9-4272-b6e8-9d31d0124e49"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.627276 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "24d60fb3-cbb9-4272-b6e8-9d31d0124e49" (UID: "24d60fb3-cbb9-4272-b6e8-9d31d0124e49"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.683119 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.683170 4925 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:23 crc kubenswrapper[4925]: I0202 11:29:23.683184 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4p8q\" (UniqueName: \"kubernetes.io/projected/24d60fb3-cbb9-4272-b6e8-9d31d0124e49-kube-api-access-g4p8q\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.099846 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" 
event={"ID":"24d60fb3-cbb9-4272-b6e8-9d31d0124e49","Type":"ContainerDied","Data":"388f0fe8f7c626c5101b7198c879b52d0945a802c5e50f2006fa72784a00bd56"} Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.100168 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388f0fe8f7c626c5101b7198c879b52d0945a802c5e50f2006fa72784a00bd56" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.099943 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h5hp9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.176179 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9"] Feb 02 11:29:24 crc kubenswrapper[4925]: E0202 11:29:24.176644 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d60fb3-cbb9-4272-b6e8-9d31d0124e49" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.176666 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d60fb3-cbb9-4272-b6e8-9d31d0124e49" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.176835 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d60fb3-cbb9-4272-b6e8-9d31d0124e49" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.177531 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.181141 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.181306 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.181404 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.181580 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.200733 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9"] Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.293823 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nvzd9\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.293903 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtd2\" (UniqueName: \"kubernetes.io/projected/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-kube-api-access-4rtd2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nvzd9\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.294000 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nvzd9\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.396185 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtd2\" (UniqueName: \"kubernetes.io/projected/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-kube-api-access-4rtd2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nvzd9\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.396268 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nvzd9\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.396384 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nvzd9\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.401906 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-nvzd9\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.401958 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nvzd9\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.416698 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtd2\" (UniqueName: \"kubernetes.io/projected/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-kube-api-access-4rtd2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nvzd9\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.500479 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:24 crc kubenswrapper[4925]: I0202 11:29:24.983799 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9"] Feb 02 11:29:25 crc kubenswrapper[4925]: I0202 11:29:25.110346 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" event={"ID":"da8b04f3-a31b-4ce5-b798-73bb52adb2bb","Type":"ContainerStarted","Data":"4cc63a921506186338d7e8d6ad6ad4245e241e494c707615d715f06671431937"} Feb 02 11:29:26 crc kubenswrapper[4925]: I0202 11:29:26.120028 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" event={"ID":"da8b04f3-a31b-4ce5-b798-73bb52adb2bb","Type":"ContainerStarted","Data":"3f228565c4fe0253647075c0d42779d0037ce915700b10d0a07a03b8112cebe3"} Feb 02 11:29:26 crc kubenswrapper[4925]: I0202 11:29:26.144879 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" podStartSLOduration=1.73527802 podStartE2EDuration="2.144858199s" podCreationTimestamp="2026-02-02 11:29:24 +0000 UTC" firstStartedPulling="2026-02-02 11:29:24.991627207 +0000 UTC m=+1941.995876169" lastFinishedPulling="2026-02-02 11:29:25.401207376 +0000 UTC m=+1942.405456348" observedRunningTime="2026-02-02 11:29:26.13594776 +0000 UTC m=+1943.140196722" watchObservedRunningTime="2026-02-02 11:29:26.144858199 +0000 UTC m=+1943.149107161" Feb 02 11:29:27 crc kubenswrapper[4925]: I0202 11:29:27.866169 4925 scope.go:117] "RemoveContainer" containerID="a4b858bb2109379e1e27e59e23882e68bf3438b4f519bcfb874fe48ebcbd675b" Feb 02 11:29:28 crc kubenswrapper[4925]: I0202 11:29:28.056107 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xxnvr"] Feb 02 11:29:28 crc kubenswrapper[4925]: I0202 11:29:28.065913 4925 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xxnvr"] Feb 02 11:29:28 crc kubenswrapper[4925]: I0202 11:29:28.675148 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d81564-431d-40c5-be81-3961fab3e8b8" path="/var/lib/kubelet/pods/20d81564-431d-40c5-be81-3961fab3e8b8/volumes" Feb 02 11:29:30 crc kubenswrapper[4925]: I0202 11:29:30.026013 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-gz8kp"] Feb 02 11:29:30 crc kubenswrapper[4925]: I0202 11:29:30.035643 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-gz8kp"] Feb 02 11:29:30 crc kubenswrapper[4925]: I0202 11:29:30.676340 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e0ab2c-a590-4b39-af8b-a055a29f01c0" path="/var/lib/kubelet/pods/15e0ab2c-a590-4b39-af8b-a055a29f01c0/volumes" Feb 02 11:29:32 crc kubenswrapper[4925]: I0202 11:29:32.030329 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xtzng"] Feb 02 11:29:32 crc kubenswrapper[4925]: I0202 11:29:32.037183 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xtzng"] Feb 02 11:29:32 crc kubenswrapper[4925]: I0202 11:29:32.674634 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66bbba42-9e45-446e-8042-a428a6269d08" path="/var/lib/kubelet/pods/66bbba42-9e45-446e-8042-a428a6269d08/volumes" Feb 02 11:29:33 crc kubenswrapper[4925]: I0202 11:29:33.172546 4925 generic.go:334] "Generic (PLEG): container finished" podID="da8b04f3-a31b-4ce5-b798-73bb52adb2bb" containerID="3f228565c4fe0253647075c0d42779d0037ce915700b10d0a07a03b8112cebe3" exitCode=0 Feb 02 11:29:33 crc kubenswrapper[4925]: I0202 11:29:33.172595 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" 
event={"ID":"da8b04f3-a31b-4ce5-b798-73bb52adb2bb","Type":"ContainerDied","Data":"3f228565c4fe0253647075c0d42779d0037ce915700b10d0a07a03b8112cebe3"} Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.561480 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.680326 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-inventory\") pod \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.680465 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-ssh-key-openstack-edpm-ipam\") pod \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.680558 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rtd2\" (UniqueName: \"kubernetes.io/projected/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-kube-api-access-4rtd2\") pod \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\" (UID: \"da8b04f3-a31b-4ce5-b798-73bb52adb2bb\") " Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.685684 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-kube-api-access-4rtd2" (OuterVolumeSpecName: "kube-api-access-4rtd2") pod "da8b04f3-a31b-4ce5-b798-73bb52adb2bb" (UID: "da8b04f3-a31b-4ce5-b798-73bb52adb2bb"). InnerVolumeSpecName "kube-api-access-4rtd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.706756 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da8b04f3-a31b-4ce5-b798-73bb52adb2bb" (UID: "da8b04f3-a31b-4ce5-b798-73bb52adb2bb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.711409 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-inventory" (OuterVolumeSpecName: "inventory") pod "da8b04f3-a31b-4ce5-b798-73bb52adb2bb" (UID: "da8b04f3-a31b-4ce5-b798-73bb52adb2bb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.783283 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.783322 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:34 crc kubenswrapper[4925]: I0202 11:29:34.783339 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rtd2\" (UniqueName: \"kubernetes.io/projected/da8b04f3-a31b-4ce5-b798-73bb52adb2bb-kube-api-access-4rtd2\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.027406 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-79pz8"] Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 
11:29:35.036682 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-79pz8"] Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.196625 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" event={"ID":"da8b04f3-a31b-4ce5-b798-73bb52adb2bb","Type":"ContainerDied","Data":"4cc63a921506186338d7e8d6ad6ad4245e241e494c707615d715f06671431937"} Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.196681 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cc63a921506186338d7e8d6ad6ad4245e241e494c707615d715f06671431937" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.196713 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.261607 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs"] Feb 02 11:29:35 crc kubenswrapper[4925]: E0202 11:29:35.262151 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8b04f3-a31b-4ce5-b798-73bb52adb2bb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.262168 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8b04f3-a31b-4ce5-b798-73bb52adb2bb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.262339 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8b04f3-a31b-4ce5-b798-73bb52adb2bb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.262898 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.265796 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.265820 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.266056 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.268041 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.288306 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs"] Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.393936 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.394002 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x6m2\" (UniqueName: \"kubernetes.io/projected/365118e6-4d8c-42c2-8880-f4fd3ec28561-kube-api-access-4x6m2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 
11:29:35.394200 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.495572 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.495899 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.495932 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x6m2\" (UniqueName: \"kubernetes.io/projected/365118e6-4d8c-42c2-8880-f4fd3ec28561-kube-api-access-4x6m2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.500748 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.504731 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.514166 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x6m2\" (UniqueName: \"kubernetes.io/projected/365118e6-4d8c-42c2-8880-f4fd3ec28561-kube-api-access-4x6m2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:35 crc kubenswrapper[4925]: I0202 11:29:35.582454 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:36 crc kubenswrapper[4925]: I0202 11:29:36.117061 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs"] Feb 02 11:29:36 crc kubenswrapper[4925]: W0202 11:29:36.121580 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod365118e6_4d8c_42c2_8880_f4fd3ec28561.slice/crio-b745a2e3d1e40297464e4d439e180bd6ee0f986c473308b189fcd28ae08f9fb7 WatchSource:0}: Error finding container b745a2e3d1e40297464e4d439e180bd6ee0f986c473308b189fcd28ae08f9fb7: Status 404 returned error can't find the container with id b745a2e3d1e40297464e4d439e180bd6ee0f986c473308b189fcd28ae08f9fb7 Feb 02 11:29:36 crc kubenswrapper[4925]: I0202 11:29:36.211696 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" event={"ID":"365118e6-4d8c-42c2-8880-f4fd3ec28561","Type":"ContainerStarted","Data":"b745a2e3d1e40297464e4d439e180bd6ee0f986c473308b189fcd28ae08f9fb7"} Feb 02 11:29:36 crc kubenswrapper[4925]: I0202 11:29:36.676264 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c13cad78-20d5-4520-81f0-3677e98a64c5" path="/var/lib/kubelet/pods/c13cad78-20d5-4520-81f0-3677e98a64c5/volumes" Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.042754 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8wd28"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.049717 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-22lwh"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.057149 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8005-account-create-update-49drq"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.077460 4925 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-269f-account-create-update-dncxz"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.088501 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8wd28"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.095794 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-269f-account-create-update-dncxz"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.102434 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7e77-account-create-update-qkg5j"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.109410 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8005-account-create-update-49drq"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.116295 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-22lwh"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.123667 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7e77-account-create-update-qkg5j"] Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.221054 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" event={"ID":"365118e6-4d8c-42c2-8880-f4fd3ec28561","Type":"ContainerStarted","Data":"60a1b354c24d39924ebc537ea75bed0b3ad4c327b53caaa78f0368ae59ebb043"} Feb 02 11:29:37 crc kubenswrapper[4925]: I0202 11:29:37.244104 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" podStartSLOduration=1.8205342610000002 podStartE2EDuration="2.244059344s" podCreationTimestamp="2026-02-02 11:29:35 +0000 UTC" firstStartedPulling="2026-02-02 11:29:36.124717252 +0000 UTC m=+1953.128966214" lastFinishedPulling="2026-02-02 11:29:36.548242335 +0000 UTC m=+1953.552491297" observedRunningTime="2026-02-02 
11:29:37.241367912 +0000 UTC m=+1954.245616904" watchObservedRunningTime="2026-02-02 11:29:37.244059344 +0000 UTC m=+1954.248308306" Feb 02 11:29:38 crc kubenswrapper[4925]: I0202 11:29:38.677994 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36eda75d-2be8-431a-9562-95965aa5e22d" path="/var/lib/kubelet/pods/36eda75d-2be8-431a-9562-95965aa5e22d/volumes" Feb 02 11:29:38 crc kubenswrapper[4925]: I0202 11:29:38.678979 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59663ecf-67bb-464d-a56a-0246eee949cc" path="/var/lib/kubelet/pods/59663ecf-67bb-464d-a56a-0246eee949cc/volumes" Feb 02 11:29:38 crc kubenswrapper[4925]: I0202 11:29:38.679520 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="724c56e3-b799-47d7-9374-a06c2d5cd6f9" path="/var/lib/kubelet/pods/724c56e3-b799-47d7-9374-a06c2d5cd6f9/volumes" Feb 02 11:29:38 crc kubenswrapper[4925]: I0202 11:29:38.680107 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95edd3b0-5a13-4845-bfbf-5e8572214a57" path="/var/lib/kubelet/pods/95edd3b0-5a13-4845-bfbf-5e8572214a57/volumes" Feb 02 11:29:38 crc kubenswrapper[4925]: I0202 11:29:38.681128 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d99078da-9bce-4614-a4a8-e78da62b3f39" path="/var/lib/kubelet/pods/d99078da-9bce-4614-a4a8-e78da62b3f39/volumes" Feb 02 11:29:46 crc kubenswrapper[4925]: I0202 11:29:46.291542 4925 generic.go:334] "Generic (PLEG): container finished" podID="365118e6-4d8c-42c2-8880-f4fd3ec28561" containerID="60a1b354c24d39924ebc537ea75bed0b3ad4c327b53caaa78f0368ae59ebb043" exitCode=0 Feb 02 11:29:46 crc kubenswrapper[4925]: I0202 11:29:46.291664 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" event={"ID":"365118e6-4d8c-42c2-8880-f4fd3ec28561","Type":"ContainerDied","Data":"60a1b354c24d39924ebc537ea75bed0b3ad4c327b53caaa78f0368ae59ebb043"} Feb 02 11:29:47 crc 
kubenswrapper[4925]: I0202 11:29:47.675224 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:29:47 crc kubenswrapper[4925]: I0202 11:29:47.709591 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-inventory\") pod \"365118e6-4d8c-42c2-8880-f4fd3ec28561\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " Feb 02 11:29:47 crc kubenswrapper[4925]: I0202 11:29:47.709790 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-ssh-key-openstack-edpm-ipam\") pod \"365118e6-4d8c-42c2-8880-f4fd3ec28561\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " Feb 02 11:29:47 crc kubenswrapper[4925]: I0202 11:29:47.709940 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x6m2\" (UniqueName: \"kubernetes.io/projected/365118e6-4d8c-42c2-8880-f4fd3ec28561-kube-api-access-4x6m2\") pod \"365118e6-4d8c-42c2-8880-f4fd3ec28561\" (UID: \"365118e6-4d8c-42c2-8880-f4fd3ec28561\") " Feb 02 11:29:47 crc kubenswrapper[4925]: I0202 11:29:47.716464 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365118e6-4d8c-42c2-8880-f4fd3ec28561-kube-api-access-4x6m2" (OuterVolumeSpecName: "kube-api-access-4x6m2") pod "365118e6-4d8c-42c2-8880-f4fd3ec28561" (UID: "365118e6-4d8c-42c2-8880-f4fd3ec28561"). InnerVolumeSpecName "kube-api-access-4x6m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:47 crc kubenswrapper[4925]: I0202 11:29:47.734742 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "365118e6-4d8c-42c2-8880-f4fd3ec28561" (UID: "365118e6-4d8c-42c2-8880-f4fd3ec28561"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:47 crc kubenswrapper[4925]: I0202 11:29:47.744196 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-inventory" (OuterVolumeSpecName: "inventory") pod "365118e6-4d8c-42c2-8880-f4fd3ec28561" (UID: "365118e6-4d8c-42c2-8880-f4fd3ec28561"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:47 crc kubenswrapper[4925]: I0202 11:29:47.813785 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:47 crc kubenswrapper[4925]: I0202 11:29:47.813814 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/365118e6-4d8c-42c2-8880-f4fd3ec28561-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:47 crc kubenswrapper[4925]: I0202 11:29:47.813826 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x6m2\" (UniqueName: \"kubernetes.io/projected/365118e6-4d8c-42c2-8880-f4fd3ec28561-kube-api-access-4x6m2\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:48 crc kubenswrapper[4925]: I0202 11:29:48.316656 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" 
event={"ID":"365118e6-4d8c-42c2-8880-f4fd3ec28561","Type":"ContainerDied","Data":"b745a2e3d1e40297464e4d439e180bd6ee0f986c473308b189fcd28ae08f9fb7"} Feb 02 11:29:48 crc kubenswrapper[4925]: I0202 11:29:48.316709 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b745a2e3d1e40297464e4d439e180bd6ee0f986c473308b189fcd28ae08f9fb7" Feb 02 11:29:48 crc kubenswrapper[4925]: I0202 11:29:48.316744 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.158488 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf"] Feb 02 11:30:00 crc kubenswrapper[4925]: E0202 11:30:00.159481 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365118e6-4d8c-42c2-8880-f4fd3ec28561" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.159501 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="365118e6-4d8c-42c2-8880-f4fd3ec28561" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.159719 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="365118e6-4d8c-42c2-8880-f4fd3ec28561" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.160437 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.169100 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf"] Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.173616 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.173941 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.241602 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9787c20-6741-4ac0-ac09-3a0b09c212f3-config-volume\") pod \"collect-profiles-29500530-t5nxf\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.241761 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zs5c\" (UniqueName: \"kubernetes.io/projected/d9787c20-6741-4ac0-ac09-3a0b09c212f3-kube-api-access-2zs5c\") pod \"collect-profiles-29500530-t5nxf\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.241940 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9787c20-6741-4ac0-ac09-3a0b09c212f3-secret-volume\") pod \"collect-profiles-29500530-t5nxf\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.343529 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zs5c\" (UniqueName: \"kubernetes.io/projected/d9787c20-6741-4ac0-ac09-3a0b09c212f3-kube-api-access-2zs5c\") pod \"collect-profiles-29500530-t5nxf\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.343692 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9787c20-6741-4ac0-ac09-3a0b09c212f3-secret-volume\") pod \"collect-profiles-29500530-t5nxf\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.343793 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9787c20-6741-4ac0-ac09-3a0b09c212f3-config-volume\") pod \"collect-profiles-29500530-t5nxf\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.344877 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9787c20-6741-4ac0-ac09-3a0b09c212f3-config-volume\") pod \"collect-profiles-29500530-t5nxf\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.350276 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d9787c20-6741-4ac0-ac09-3a0b09c212f3-secret-volume\") pod \"collect-profiles-29500530-t5nxf\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.368135 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zs5c\" (UniqueName: \"kubernetes.io/projected/d9787c20-6741-4ac0-ac09-3a0b09c212f3-kube-api-access-2zs5c\") pod \"collect-profiles-29500530-t5nxf\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.484428 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:00 crc kubenswrapper[4925]: I0202 11:30:00.947432 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf"] Feb 02 11:30:01 crc kubenswrapper[4925]: I0202 11:30:01.438517 4925 generic.go:334] "Generic (PLEG): container finished" podID="d9787c20-6741-4ac0-ac09-3a0b09c212f3" containerID="92ce0fd69b1ff68fb2e42f4955f4ded464107f95caae9007ea43dc8edd78cb29" exitCode=0 Feb 02 11:30:01 crc kubenswrapper[4925]: I0202 11:30:01.438623 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" event={"ID":"d9787c20-6741-4ac0-ac09-3a0b09c212f3","Type":"ContainerDied","Data":"92ce0fd69b1ff68fb2e42f4955f4ded464107f95caae9007ea43dc8edd78cb29"} Feb 02 11:30:01 crc kubenswrapper[4925]: I0202 11:30:01.438865 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" 
event={"ID":"d9787c20-6741-4ac0-ac09-3a0b09c212f3","Type":"ContainerStarted","Data":"589c5e96cb4b2db63b0ab4ca0640114ac2dd51ba2785d09f993240e6635958a2"} Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.751388 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.793943 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9787c20-6741-4ac0-ac09-3a0b09c212f3-secret-volume\") pod \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.794028 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9787c20-6741-4ac0-ac09-3a0b09c212f3-config-volume\") pod \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.794176 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zs5c\" (UniqueName: \"kubernetes.io/projected/d9787c20-6741-4ac0-ac09-3a0b09c212f3-kube-api-access-2zs5c\") pod \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\" (UID: \"d9787c20-6741-4ac0-ac09-3a0b09c212f3\") " Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.794790 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9787c20-6741-4ac0-ac09-3a0b09c212f3-config-volume" (OuterVolumeSpecName: "config-volume") pod "d9787c20-6741-4ac0-ac09-3a0b09c212f3" (UID: "d9787c20-6741-4ac0-ac09-3a0b09c212f3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.799895 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9787c20-6741-4ac0-ac09-3a0b09c212f3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d9787c20-6741-4ac0-ac09-3a0b09c212f3" (UID: "d9787c20-6741-4ac0-ac09-3a0b09c212f3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.800245 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9787c20-6741-4ac0-ac09-3a0b09c212f3-kube-api-access-2zs5c" (OuterVolumeSpecName: "kube-api-access-2zs5c") pod "d9787c20-6741-4ac0-ac09-3a0b09c212f3" (UID: "d9787c20-6741-4ac0-ac09-3a0b09c212f3"). InnerVolumeSpecName "kube-api-access-2zs5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.896288 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zs5c\" (UniqueName: \"kubernetes.io/projected/d9787c20-6741-4ac0-ac09-3a0b09c212f3-kube-api-access-2zs5c\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.896326 4925 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9787c20-6741-4ac0-ac09-3a0b09c212f3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:02 crc kubenswrapper[4925]: I0202 11:30:02.896338 4925 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9787c20-6741-4ac0-ac09-3a0b09c212f3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:03 crc kubenswrapper[4925]: I0202 11:30:03.455138 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" 
event={"ID":"d9787c20-6741-4ac0-ac09-3a0b09c212f3","Type":"ContainerDied","Data":"589c5e96cb4b2db63b0ab4ca0640114ac2dd51ba2785d09f993240e6635958a2"} Feb 02 11:30:03 crc kubenswrapper[4925]: I0202 11:30:03.455453 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="589c5e96cb4b2db63b0ab4ca0640114ac2dd51ba2785d09f993240e6635958a2" Feb 02 11:30:03 crc kubenswrapper[4925]: I0202 11:30:03.455347 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf" Feb 02 11:30:03 crc kubenswrapper[4925]: I0202 11:30:03.818784 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq"] Feb 02 11:30:03 crc kubenswrapper[4925]: I0202 11:30:03.825282 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-xsfgq"] Feb 02 11:30:04 crc kubenswrapper[4925]: I0202 11:30:04.675585 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a87278a-c899-40df-99ef-324a5415be60" path="/var/lib/kubelet/pods/3a87278a-c899-40df-99ef-324a5415be60/volumes" Feb 02 11:30:10 crc kubenswrapper[4925]: I0202 11:30:10.036470 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9cjl8"] Feb 02 11:30:10 crc kubenswrapper[4925]: I0202 11:30:10.043086 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9cjl8"] Feb 02 11:30:10 crc kubenswrapper[4925]: I0202 11:30:10.674600 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ab37c5-8a76-48f2-ade7-92735dc062c4" path="/var/lib/kubelet/pods/e9ab37c5-8a76-48f2-ade7-92735dc062c4/volumes" Feb 02 11:30:17 crc kubenswrapper[4925]: I0202 11:30:17.025262 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-d8tqm"] Feb 02 11:30:17 crc kubenswrapper[4925]: I0202 
11:30:17.034053 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-d8tqm"] Feb 02 11:30:18 crc kubenswrapper[4925]: I0202 11:30:18.674541 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26690f1-3d10-4ef3-a16a-7c33dc1c62c0" path="/var/lib/kubelet/pods/b26690f1-3d10-4ef3-a16a-7c33dc1c62c0/volumes" Feb 02 11:30:27 crc kubenswrapper[4925]: I0202 11:30:27.975937 4925 scope.go:117] "RemoveContainer" containerID="904e9cf8bf2d427ca1214200e077aff4370afbced0959f9f89e43ff54f630981" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.010554 4925 scope.go:117] "RemoveContainer" containerID="c435f515ee52e76b55caaa8ade2d4d35413a90e0394d4fb2904420836f56e1a6" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.066911 4925 scope.go:117] "RemoveContainer" containerID="18f734283bcd663a6e17b14ab37d7800408cddea2006d707687829dfdf661294" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.107639 4925 scope.go:117] "RemoveContainer" containerID="1c2e4a77dbc7f61e608f64e191cf8dab8fe2d8a4a8eed742bef5af1f476b63eb" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.142002 4925 scope.go:117] "RemoveContainer" containerID="5e5b59e06dbdb7770342482bb43726801237d195b13a533dccd4490a20883f2b" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.184391 4925 scope.go:117] "RemoveContainer" containerID="fa986dea4770810b31b37d0dfc9c196b9b4e872e47ecb9f6e05b47e90880642f" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.230741 4925 scope.go:117] "RemoveContainer" containerID="f4531b9b986db1f5008d4506a7efe7089561e36ecc4368f1e2878f3190f1988c" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.248356 4925 scope.go:117] "RemoveContainer" containerID="abc4d3d7435f0b4c9500d1243cb62e79abe1e544f573cc24f7e9c687093995dc" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.272637 4925 scope.go:117] "RemoveContainer" containerID="7535733d48bbea7204459e4bc5a644fce405727796bf303cfe6be37054da990c" Feb 02 11:30:28 crc 
kubenswrapper[4925]: I0202 11:30:28.294377 4925 scope.go:117] "RemoveContainer" containerID="cf3a517b2cd796ca28b1a9fe64e8cb308993e7a69c4c0bec2db79a6d11eeb1c0" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.334766 4925 scope.go:117] "RemoveContainer" containerID="6da9662bb56f1903a4f12e6a7190939a797d70d67bc24d74d3fa653893762626" Feb 02 11:30:28 crc kubenswrapper[4925]: I0202 11:30:28.365873 4925 scope.go:117] "RemoveContainer" containerID="f91c31055d3438fff777db49c5b7837a20608d3c72aa29b206b2569b706065b1" Feb 02 11:30:32 crc kubenswrapper[4925]: I0202 11:30:32.041800 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7982"] Feb 02 11:30:32 crc kubenswrapper[4925]: I0202 11:30:32.049592 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r7982"] Feb 02 11:30:32 crc kubenswrapper[4925]: I0202 11:30:32.674764 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fa273c-1c74-4898-9a16-547d9397e0da" path="/var/lib/kubelet/pods/49fa273c-1c74-4898-9a16-547d9397e0da/volumes" Feb 02 11:30:51 crc kubenswrapper[4925]: I0202 11:30:51.042323 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rj59c"] Feb 02 11:30:51 crc kubenswrapper[4925]: I0202 11:30:51.054034 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rj59c"] Feb 02 11:30:52 crc kubenswrapper[4925]: I0202 11:30:52.673780 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23811ff6-75a2-4ea6-9ebb-bbca86b5cb38" path="/var/lib/kubelet/pods/23811ff6-75a2-4ea6-9ebb-bbca86b5cb38/volumes" Feb 02 11:31:13 crc kubenswrapper[4925]: I0202 11:31:13.038856 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w48pz"] Feb 02 11:31:13 crc kubenswrapper[4925]: I0202 11:31:13.048362 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-w48pz"] Feb 02 11:31:14 crc kubenswrapper[4925]: I0202 11:31:14.677798 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e37af9-8c2c-4349-8496-4af3ce643c26" path="/var/lib/kubelet/pods/c5e37af9-8c2c-4349-8496-4af3ce643c26/volumes" Feb 02 11:31:28 crc kubenswrapper[4925]: I0202 11:31:28.572487 4925 scope.go:117] "RemoveContainer" containerID="d1741f20250c77558b3d93b3bbc5ab72807b100e5b7c7a0c2cafeac50c50af7e" Feb 02 11:31:28 crc kubenswrapper[4925]: I0202 11:31:28.619930 4925 scope.go:117] "RemoveContainer" containerID="3c26de6cfefc8ac9af187925e530198add4d54e8708f781cc0adf3aa067fea01" Feb 02 11:31:28 crc kubenswrapper[4925]: I0202 11:31:28.653861 4925 scope.go:117] "RemoveContainer" containerID="26722a9ff8843dbf64103f97b7e1c266525c9e970107164e240fb31472bc3990" Feb 02 11:31:39 crc kubenswrapper[4925]: I0202 11:31:39.039132 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-p8z6b"] Feb 02 11:31:39 crc kubenswrapper[4925]: I0202 11:31:39.051293 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-p8z6b"] Feb 02 11:31:40 crc kubenswrapper[4925]: I0202 11:31:40.674794 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a09787b-0714-46c8-9617-aedde4f0d773" path="/var/lib/kubelet/pods/8a09787b-0714-46c8-9617-aedde4f0d773/volumes" Feb 02 11:31:43 crc kubenswrapper[4925]: I0202 11:31:43.398514 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:31:43 crc kubenswrapper[4925]: I0202 11:31:43.399009 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:32:13 crc kubenswrapper[4925]: I0202 11:32:13.398482 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:32:13 crc kubenswrapper[4925]: I0202 11:32:13.399064 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:32:28 crc kubenswrapper[4925]: I0202 11:32:28.737242 4925 scope.go:117] "RemoveContainer" containerID="442e14b8435bc6177c9d95b3bf8c0c53e6704f6604fcef94061ddf21a18ac3a1" Feb 02 11:32:43 crc kubenswrapper[4925]: I0202 11:32:43.398576 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:32:43 crc kubenswrapper[4925]: I0202 11:32:43.399197 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:32:43 crc kubenswrapper[4925]: I0202 11:32:43.399279 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:32:43 crc kubenswrapper[4925]: I0202 11:32:43.400439 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32e3054c374b3ba80c40613fe344e538f1d343435befd2568b4687e277c37ce4"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:32:43 crc kubenswrapper[4925]: I0202 11:32:43.400541 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://32e3054c374b3ba80c40613fe344e538f1d343435befd2568b4687e277c37ce4" gracePeriod=600 Feb 02 11:32:43 crc kubenswrapper[4925]: I0202 11:32:43.764143 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="32e3054c374b3ba80c40613fe344e538f1d343435befd2568b4687e277c37ce4" exitCode=0 Feb 02 11:32:43 crc kubenswrapper[4925]: I0202 11:32:43.764195 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"32e3054c374b3ba80c40613fe344e538f1d343435befd2568b4687e277c37ce4"} Feb 02 11:32:43 crc kubenswrapper[4925]: I0202 11:32:43.764238 4925 scope.go:117] "RemoveContainer" containerID="2220ba6ff298a326bc53001a8c7441c1936ad1559f626dd26ada50cc4b0a41ff" Feb 02 11:32:44 crc kubenswrapper[4925]: I0202 11:32:44.773396 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42"} Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.710325 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h2d2g"] Feb 02 11:33:04 crc kubenswrapper[4925]: E0202 11:33:04.711388 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9787c20-6741-4ac0-ac09-3a0b09c212f3" containerName="collect-profiles" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.711406 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9787c20-6741-4ac0-ac09-3a0b09c212f3" containerName="collect-profiles" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.711621 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9787c20-6741-4ac0-ac09-3a0b09c212f3" containerName="collect-profiles" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.712945 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.725457 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnf6l\" (UniqueName: \"kubernetes.io/projected/796f0552-3263-4aa8-ae0e-3066641a789e-kube-api-access-tnf6l\") pod \"redhat-marketplace-h2d2g\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.725904 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-catalog-content\") pod \"redhat-marketplace-h2d2g\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.726040 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-utilities\") pod \"redhat-marketplace-h2d2g\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.736296 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2d2g"] Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.827699 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-catalog-content\") pod \"redhat-marketplace-h2d2g\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.829004 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-catalog-content\") pod \"redhat-marketplace-h2d2g\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.829283 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-utilities\") pod \"redhat-marketplace-h2d2g\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.829418 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnf6l\" (UniqueName: \"kubernetes.io/projected/796f0552-3263-4aa8-ae0e-3066641a789e-kube-api-access-tnf6l\") pod \"redhat-marketplace-h2d2g\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.832605 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-utilities\") pod \"redhat-marketplace-h2d2g\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:04 crc kubenswrapper[4925]: I0202 11:33:04.852242 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnf6l\" (UniqueName: \"kubernetes.io/projected/796f0552-3263-4aa8-ae0e-3066641a789e-kube-api-access-tnf6l\") pod \"redhat-marketplace-h2d2g\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:05 crc kubenswrapper[4925]: I0202 11:33:05.042877 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:05 crc kubenswrapper[4925]: I0202 11:33:05.515394 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2d2g"] Feb 02 11:33:05 crc kubenswrapper[4925]: I0202 11:33:05.930677 4925 generic.go:334] "Generic (PLEG): container finished" podID="796f0552-3263-4aa8-ae0e-3066641a789e" containerID="9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9" exitCode=0 Feb 02 11:33:05 crc kubenswrapper[4925]: I0202 11:33:05.930988 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2d2g" event={"ID":"796f0552-3263-4aa8-ae0e-3066641a789e","Type":"ContainerDied","Data":"9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9"} Feb 02 11:33:05 crc kubenswrapper[4925]: I0202 11:33:05.931016 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2d2g" event={"ID":"796f0552-3263-4aa8-ae0e-3066641a789e","Type":"ContainerStarted","Data":"fa69ae6085bb50cf0b3f1f9d9e7f78b6dd42c0a172d859e47b4ed7815e779370"} Feb 02 11:33:07 crc kubenswrapper[4925]: I0202 11:33:07.946253 4925 generic.go:334] "Generic (PLEG): container finished" podID="796f0552-3263-4aa8-ae0e-3066641a789e" containerID="306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627" exitCode=0 Feb 02 11:33:07 crc kubenswrapper[4925]: I0202 11:33:07.946458 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2d2g" event={"ID":"796f0552-3263-4aa8-ae0e-3066641a789e","Type":"ContainerDied","Data":"306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627"} Feb 02 11:33:08 crc kubenswrapper[4925]: I0202 11:33:08.955613 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2d2g" 
event={"ID":"796f0552-3263-4aa8-ae0e-3066641a789e","Type":"ContainerStarted","Data":"c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35"} Feb 02 11:33:08 crc kubenswrapper[4925]: I0202 11:33:08.981299 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h2d2g" podStartSLOduration=2.579375675 podStartE2EDuration="4.981277155s" podCreationTimestamp="2026-02-02 11:33:04 +0000 UTC" firstStartedPulling="2026-02-02 11:33:05.932872839 +0000 UTC m=+2162.937121801" lastFinishedPulling="2026-02-02 11:33:08.334774319 +0000 UTC m=+2165.339023281" observedRunningTime="2026-02-02 11:33:08.973502714 +0000 UTC m=+2165.977751686" watchObservedRunningTime="2026-02-02 11:33:08.981277155 +0000 UTC m=+2165.985526117" Feb 02 11:33:15 crc kubenswrapper[4925]: I0202 11:33:15.044458 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:15 crc kubenswrapper[4925]: I0202 11:33:15.044961 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:15 crc kubenswrapper[4925]: I0202 11:33:15.088915 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:16 crc kubenswrapper[4925]: I0202 11:33:16.056402 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:16 crc kubenswrapper[4925]: I0202 11:33:16.108117 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2d2g"] Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.023403 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h2d2g" podUID="796f0552-3263-4aa8-ae0e-3066641a789e" containerName="registry-server" 
containerID="cri-o://c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35" gracePeriod=2 Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.493239 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.687455 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-utilities\") pod \"796f0552-3263-4aa8-ae0e-3066641a789e\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.687579 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnf6l\" (UniqueName: \"kubernetes.io/projected/796f0552-3263-4aa8-ae0e-3066641a789e-kube-api-access-tnf6l\") pod \"796f0552-3263-4aa8-ae0e-3066641a789e\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.687670 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-catalog-content\") pod \"796f0552-3263-4aa8-ae0e-3066641a789e\" (UID: \"796f0552-3263-4aa8-ae0e-3066641a789e\") " Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.688756 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-utilities" (OuterVolumeSpecName: "utilities") pod "796f0552-3263-4aa8-ae0e-3066641a789e" (UID: "796f0552-3263-4aa8-ae0e-3066641a789e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.694104 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796f0552-3263-4aa8-ae0e-3066641a789e-kube-api-access-tnf6l" (OuterVolumeSpecName: "kube-api-access-tnf6l") pod "796f0552-3263-4aa8-ae0e-3066641a789e" (UID: "796f0552-3263-4aa8-ae0e-3066641a789e"). InnerVolumeSpecName "kube-api-access-tnf6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.712964 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "796f0552-3263-4aa8-ae0e-3066641a789e" (UID: "796f0552-3263-4aa8-ae0e-3066641a789e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.790349 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnf6l\" (UniqueName: \"kubernetes.io/projected/796f0552-3263-4aa8-ae0e-3066641a789e-kube-api-access-tnf6l\") on node \"crc\" DevicePath \"\"" Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.790386 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:33:18 crc kubenswrapper[4925]: I0202 11:33:18.790395 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796f0552-3263-4aa8-ae0e-3066641a789e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.034129 4925 generic.go:334] "Generic (PLEG): container finished" podID="796f0552-3263-4aa8-ae0e-3066641a789e" 
containerID="c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35" exitCode=0 Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.034180 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2d2g" event={"ID":"796f0552-3263-4aa8-ae0e-3066641a789e","Type":"ContainerDied","Data":"c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35"} Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.034212 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2d2g" event={"ID":"796f0552-3263-4aa8-ae0e-3066641a789e","Type":"ContainerDied","Data":"fa69ae6085bb50cf0b3f1f9d9e7f78b6dd42c0a172d859e47b4ed7815e779370"} Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.034230 4925 scope.go:117] "RemoveContainer" containerID="c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.035188 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2d2g" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.056294 4925 scope.go:117] "RemoveContainer" containerID="306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.073335 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2d2g"] Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.079872 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2d2g"] Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.096167 4925 scope.go:117] "RemoveContainer" containerID="9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.122817 4925 scope.go:117] "RemoveContainer" containerID="c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35" Feb 02 11:33:19 crc kubenswrapper[4925]: E0202 11:33:19.123346 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35\": container with ID starting with c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35 not found: ID does not exist" containerID="c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.123378 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35"} err="failed to get container status \"c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35\": rpc error: code = NotFound desc = could not find container \"c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35\": container with ID starting with c496238d89a2d03b2992971643c5b09eaa99325cfbcddeb683f273d190c32e35 not found: 
ID does not exist" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.123400 4925 scope.go:117] "RemoveContainer" containerID="306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627" Feb 02 11:33:19 crc kubenswrapper[4925]: E0202 11:33:19.123795 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627\": container with ID starting with 306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627 not found: ID does not exist" containerID="306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.123814 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627"} err="failed to get container status \"306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627\": rpc error: code = NotFound desc = could not find container \"306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627\": container with ID starting with 306e2cdd08b9242ec916a68fc094d9f6e01cc29da4bc3f13d7e35c696f6ed627 not found: ID does not exist" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.123827 4925 scope.go:117] "RemoveContainer" containerID="9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9" Feb 02 11:33:19 crc kubenswrapper[4925]: E0202 11:33:19.124143 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9\": container with ID starting with 9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9 not found: ID does not exist" containerID="9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9" Feb 02 11:33:19 crc kubenswrapper[4925]: I0202 11:33:19.124186 4925 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9"} err="failed to get container status \"9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9\": rpc error: code = NotFound desc = could not find container \"9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9\": container with ID starting with 9c1ddd1b11fd80d77d96ba61376008c77ce935999d79de051db68813db8b7cd9 not found: ID does not exist" Feb 02 11:33:20 crc kubenswrapper[4925]: I0202 11:33:20.674525 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796f0552-3263-4aa8-ae0e-3066641a789e" path="/var/lib/kubelet/pods/796f0552-3263-4aa8-ae0e-3066641a789e/volumes" Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.891206 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.901712 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.912719 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t2gnl"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.921802 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t26jf"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.930597 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.936646 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.942608 4925 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-scg48"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.948227 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.968984 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nvzd9"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.974498 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.986813 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h5hp9"] Feb 02 11:33:46 crc kubenswrapper[4925]: I0202 11:33:46.996470 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twffg"] Feb 02 11:33:47 crc kubenswrapper[4925]: I0202 11:33:47.003278 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8"] Feb 02 11:33:47 crc kubenswrapper[4925]: I0202 11:33:47.009506 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs"] Feb 02 11:33:47 crc kubenswrapper[4925]: I0202 11:33:47.016011 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w"] Feb 02 11:33:47 crc kubenswrapper[4925]: I0202 11:33:47.021277 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s9mrf"] Feb 02 11:33:47 crc kubenswrapper[4925]: I0202 11:33:47.026450 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcbcs"] Feb 02 11:33:47 crc kubenswrapper[4925]: 
I0202 11:33:47.031895 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h5hp9"] Feb 02 11:33:47 crc kubenswrapper[4925]: I0202 11:33:47.037152 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-8gt2w"] Feb 02 11:33:47 crc kubenswrapper[4925]: I0202 11:33:47.042916 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h2jd8"] Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.680902 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d60fb3-cbb9-4272-b6e8-9d31d0124e49" path="/var/lib/kubelet/pods/24d60fb3-cbb9-4272-b6e8-9d31d0124e49/volumes" Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.682875 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365118e6-4d8c-42c2-8880-f4fd3ec28561" path="/var/lib/kubelet/pods/365118e6-4d8c-42c2-8880-f4fd3ec28561/volumes" Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.684239 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d93fef7-4ec8-456f-a138-40b0175ce0ce" path="/var/lib/kubelet/pods/6d93fef7-4ec8-456f-a138-40b0175ce0ce/volumes" Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.685406 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="987e20ae-ea4c-4754-9bd2-9dcb4fda76ae" path="/var/lib/kubelet/pods/987e20ae-ea4c-4754-9bd2-9dcb4fda76ae/volumes" Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.687284 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf98756-eb42-441f-ac98-d877ebd79a9a" path="/var/lib/kubelet/pods/bdf98756-eb42-441f-ac98-d877ebd79a9a/volumes" Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.688110 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8b04f3-a31b-4ce5-b798-73bb52adb2bb" 
path="/var/lib/kubelet/pods/da8b04f3-a31b-4ce5-b798-73bb52adb2bb/volumes" Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.688832 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd48f74f-90ff-4eee-bc12-cc30de87d165" path="/var/lib/kubelet/pods/dd48f74f-90ff-4eee-bc12-cc30de87d165/volumes" Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.689849 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14ed962-961f-47b9-8694-880007d9538f" path="/var/lib/kubelet/pods/e14ed962-961f-47b9-8694-880007d9538f/volumes" Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.690445 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f63e74-c179-41bc-a05e-8a374a9710b7" path="/var/lib/kubelet/pods/e9f63e74-c179-41bc-a05e-8a374a9710b7/volumes" Feb 02 11:33:48 crc kubenswrapper[4925]: I0202 11:33:48.691038 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad6a60e-a28e-4942-8fa2-6cceb8e1b146" path="/var/lib/kubelet/pods/fad6a60e-a28e-4942-8fa2-6cceb8e1b146/volumes" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.268492 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5hww"] Feb 02 11:33:52 crc kubenswrapper[4925]: E0202 11:33:52.269590 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796f0552-3263-4aa8-ae0e-3066641a789e" containerName="extract-content" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.269608 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="796f0552-3263-4aa8-ae0e-3066641a789e" containerName="extract-content" Feb 02 11:33:52 crc kubenswrapper[4925]: E0202 11:33:52.269637 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796f0552-3263-4aa8-ae0e-3066641a789e" containerName="extract-utilities" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.269646 4925 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="796f0552-3263-4aa8-ae0e-3066641a789e" containerName="extract-utilities" Feb 02 11:33:52 crc kubenswrapper[4925]: E0202 11:33:52.269669 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796f0552-3263-4aa8-ae0e-3066641a789e" containerName="registry-server" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.269678 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="796f0552-3263-4aa8-ae0e-3066641a789e" containerName="registry-server" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.269909 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="796f0552-3263-4aa8-ae0e-3066641a789e" containerName="registry-server" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.271474 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.276917 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5hww"] Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.284600 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-utilities\") pod \"community-operators-z5hww\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.284695 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8r47\" (UniqueName: \"kubernetes.io/projected/9519410d-db71-458b-9d73-ce1be203ee2e-kube-api-access-w8r47\") pod \"community-operators-z5hww\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.284728 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-catalog-content\") pod \"community-operators-z5hww\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.369375 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q"] Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.370461 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.376509 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.378269 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.379902 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.379962 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.379914 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.386733 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-utilities\") pod \"community-operators-z5hww\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc 
kubenswrapper[4925]: I0202 11:33:52.386796 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwk7t\" (UniqueName: \"kubernetes.io/projected/92c7fc53-ac73-4641-90de-b290231ea6a9-kube-api-access-fwk7t\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.386845 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8r47\" (UniqueName: \"kubernetes.io/projected/9519410d-db71-458b-9d73-ce1be203ee2e-kube-api-access-w8r47\") pod \"community-operators-z5hww\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.386882 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.386917 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-catalog-content\") pod \"community-operators-z5hww\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.386965 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ceph\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.386993 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.387030 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.387170 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-utilities\") pod \"community-operators-z5hww\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.387564 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-catalog-content\") pod \"community-operators-z5hww\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.396890 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q"] Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.413780 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8r47\" (UniqueName: \"kubernetes.io/projected/9519410d-db71-458b-9d73-ce1be203ee2e-kube-api-access-w8r47\") pod \"community-operators-z5hww\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.488032 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.488124 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.488171 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.488319 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwk7t\" (UniqueName: 
\"kubernetes.io/projected/92c7fc53-ac73-4641-90de-b290231ea6a9-kube-api-access-fwk7t\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.488794 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.491832 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.492292 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.493467 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.497150 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.507983 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwk7t\" (UniqueName: \"kubernetes.io/projected/92c7fc53-ac73-4641-90de-b290231ea6a9-kube-api-access-fwk7t\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.596772 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.686839 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:33:52 crc kubenswrapper[4925]: I0202 11:33:52.921168 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5hww"] Feb 02 11:33:53 crc kubenswrapper[4925]: I0202 11:33:53.322698 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q"] Feb 02 11:33:53 crc kubenswrapper[4925]: W0202 11:33:53.330848 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92c7fc53_ac73_4641_90de_b290231ea6a9.slice/crio-75b042250a2c9c02f94f0165164189a055da3f12af6db0d47b7cbce11e6815b5 WatchSource:0}: Error finding container 75b042250a2c9c02f94f0165164189a055da3f12af6db0d47b7cbce11e6815b5: Status 404 returned error can't find the container with id 75b042250a2c9c02f94f0165164189a055da3f12af6db0d47b7cbce11e6815b5 Feb 02 11:33:53 crc kubenswrapper[4925]: I0202 11:33:53.338148 4925 generic.go:334] "Generic (PLEG): container finished" podID="9519410d-db71-458b-9d73-ce1be203ee2e" containerID="3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98" exitCode=0 Feb 02 11:33:53 crc kubenswrapper[4925]: I0202 11:33:53.338187 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5hww" event={"ID":"9519410d-db71-458b-9d73-ce1be203ee2e","Type":"ContainerDied","Data":"3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98"} Feb 02 11:33:53 crc kubenswrapper[4925]: I0202 11:33:53.338212 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5hww" event={"ID":"9519410d-db71-458b-9d73-ce1be203ee2e","Type":"ContainerStarted","Data":"4890e1696f11005c675e94cf5d7237a69cb500eb9c5bf3231cb47bc3784dcd4f"} Feb 02 11:33:54 crc kubenswrapper[4925]: I0202 11:33:54.355376 4925 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" event={"ID":"92c7fc53-ac73-4641-90de-b290231ea6a9","Type":"ContainerStarted","Data":"ee3f419453fde10672bc6500e3a01e6c56e8bc1f286541b4f4b7b4f311e919b3"} Feb 02 11:33:54 crc kubenswrapper[4925]: I0202 11:33:54.355762 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" event={"ID":"92c7fc53-ac73-4641-90de-b290231ea6a9","Type":"ContainerStarted","Data":"75b042250a2c9c02f94f0165164189a055da3f12af6db0d47b7cbce11e6815b5"} Feb 02 11:33:54 crc kubenswrapper[4925]: I0202 11:33:54.374752 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" podStartSLOduration=1.911044185 podStartE2EDuration="2.374728124s" podCreationTimestamp="2026-02-02 11:33:52 +0000 UTC" firstStartedPulling="2026-02-02 11:33:53.33356216 +0000 UTC m=+2210.337811112" lastFinishedPulling="2026-02-02 11:33:53.797246089 +0000 UTC m=+2210.801495051" observedRunningTime="2026-02-02 11:33:54.369139792 +0000 UTC m=+2211.373388774" watchObservedRunningTime="2026-02-02 11:33:54.374728124 +0000 UTC m=+2211.378977086" Feb 02 11:33:55 crc kubenswrapper[4925]: I0202 11:33:55.368362 4925 generic.go:334] "Generic (PLEG): container finished" podID="9519410d-db71-458b-9d73-ce1be203ee2e" containerID="c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d" exitCode=0 Feb 02 11:33:55 crc kubenswrapper[4925]: I0202 11:33:55.368498 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5hww" event={"ID":"9519410d-db71-458b-9d73-ce1be203ee2e","Type":"ContainerDied","Data":"c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d"} Feb 02 11:33:56 crc kubenswrapper[4925]: I0202 11:33:56.381585 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5hww" 
event={"ID":"9519410d-db71-458b-9d73-ce1be203ee2e","Type":"ContainerStarted","Data":"f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06"} Feb 02 11:33:57 crc kubenswrapper[4925]: I0202 11:33:57.413032 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5hww" podStartSLOduration=2.823247482 podStartE2EDuration="5.413014306s" podCreationTimestamp="2026-02-02 11:33:52 +0000 UTC" firstStartedPulling="2026-02-02 11:33:53.346386887 +0000 UTC m=+2210.350635849" lastFinishedPulling="2026-02-02 11:33:55.936153711 +0000 UTC m=+2212.940402673" observedRunningTime="2026-02-02 11:33:57.406848128 +0000 UTC m=+2214.411097090" watchObservedRunningTime="2026-02-02 11:33:57.413014306 +0000 UTC m=+2214.417263268" Feb 02 11:34:02 crc kubenswrapper[4925]: I0202 11:34:02.597506 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:34:02 crc kubenswrapper[4925]: I0202 11:34:02.598110 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:34:02 crc kubenswrapper[4925]: I0202 11:34:02.650774 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:34:03 crc kubenswrapper[4925]: I0202 11:34:03.495467 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:34:04 crc kubenswrapper[4925]: I0202 11:34:04.041418 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5hww"] Feb 02 11:34:05 crc kubenswrapper[4925]: I0202 11:34:05.458963 4925 generic.go:334] "Generic (PLEG): container finished" podID="92c7fc53-ac73-4641-90de-b290231ea6a9" containerID="ee3f419453fde10672bc6500e3a01e6c56e8bc1f286541b4f4b7b4f311e919b3" exitCode=0 Feb 02 11:34:05 crc 
kubenswrapper[4925]: I0202 11:34:05.459062 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" event={"ID":"92c7fc53-ac73-4641-90de-b290231ea6a9","Type":"ContainerDied","Data":"ee3f419453fde10672bc6500e3a01e6c56e8bc1f286541b4f4b7b4f311e919b3"} Feb 02 11:34:05 crc kubenswrapper[4925]: I0202 11:34:05.459520 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z5hww" podUID="9519410d-db71-458b-9d73-ce1be203ee2e" containerName="registry-server" containerID="cri-o://f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06" gracePeriod=2 Feb 02 11:34:05 crc kubenswrapper[4925]: I0202 11:34:05.878785 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.065295 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8r47\" (UniqueName: \"kubernetes.io/projected/9519410d-db71-458b-9d73-ce1be203ee2e-kube-api-access-w8r47\") pod \"9519410d-db71-458b-9d73-ce1be203ee2e\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.065714 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-catalog-content\") pod \"9519410d-db71-458b-9d73-ce1be203ee2e\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.065757 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-utilities\") pod \"9519410d-db71-458b-9d73-ce1be203ee2e\" (UID: \"9519410d-db71-458b-9d73-ce1be203ee2e\") " Feb 02 11:34:06 crc kubenswrapper[4925]: 
I0202 11:34:06.066693 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-utilities" (OuterVolumeSpecName: "utilities") pod "9519410d-db71-458b-9d73-ce1be203ee2e" (UID: "9519410d-db71-458b-9d73-ce1be203ee2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.080870 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9519410d-db71-458b-9d73-ce1be203ee2e-kube-api-access-w8r47" (OuterVolumeSpecName: "kube-api-access-w8r47") pod "9519410d-db71-458b-9d73-ce1be203ee2e" (UID: "9519410d-db71-458b-9d73-ce1be203ee2e"). InnerVolumeSpecName "kube-api-access-w8r47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.115457 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9519410d-db71-458b-9d73-ce1be203ee2e" (UID: "9519410d-db71-458b-9d73-ce1be203ee2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.168241 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8r47\" (UniqueName: \"kubernetes.io/projected/9519410d-db71-458b-9d73-ce1be203ee2e-kube-api-access-w8r47\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.168291 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.168303 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9519410d-db71-458b-9d73-ce1be203ee2e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.475300 4925 generic.go:334] "Generic (PLEG): container finished" podID="9519410d-db71-458b-9d73-ce1be203ee2e" containerID="f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06" exitCode=0 Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.475444 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z5hww" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.475448 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5hww" event={"ID":"9519410d-db71-458b-9d73-ce1be203ee2e","Type":"ContainerDied","Data":"f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06"} Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.475551 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5hww" event={"ID":"9519410d-db71-458b-9d73-ce1be203ee2e","Type":"ContainerDied","Data":"4890e1696f11005c675e94cf5d7237a69cb500eb9c5bf3231cb47bc3784dcd4f"} Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.475590 4925 scope.go:117] "RemoveContainer" containerID="f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.520958 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5hww"] Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.522679 4925 scope.go:117] "RemoveContainer" containerID="c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.527853 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z5hww"] Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.565635 4925 scope.go:117] "RemoveContainer" containerID="3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.593091 4925 scope.go:117] "RemoveContainer" containerID="f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06" Feb 02 11:34:06 crc kubenswrapper[4925]: E0202 11:34:06.595381 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06\": container with ID starting with f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06 not found: ID does not exist" containerID="f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.595418 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06"} err="failed to get container status \"f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06\": rpc error: code = NotFound desc = could not find container \"f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06\": container with ID starting with f6c569a4489f9f0d038809a0694f7444f7a7fef3686ae340a1c155e46d289e06 not found: ID does not exist" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.595440 4925 scope.go:117] "RemoveContainer" containerID="c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d" Feb 02 11:34:06 crc kubenswrapper[4925]: E0202 11:34:06.595735 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d\": container with ID starting with c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d not found: ID does not exist" containerID="c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.595766 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d"} err="failed to get container status \"c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d\": rpc error: code = NotFound desc = could not find container \"c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d\": container with ID 
starting with c592b7919d566d480e6819a932cad9e073f9aab1a40af534307cf1e29984313d not found: ID does not exist" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.595784 4925 scope.go:117] "RemoveContainer" containerID="3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98" Feb 02 11:34:06 crc kubenswrapper[4925]: E0202 11:34:06.596016 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98\": container with ID starting with 3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98 not found: ID does not exist" containerID="3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.596044 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98"} err="failed to get container status \"3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98\": rpc error: code = NotFound desc = could not find container \"3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98\": container with ID starting with 3f5ecb382fd9337afb583a2f53403ad8a40e0402723295242fb53309b2b4ad98 not found: ID does not exist" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.676036 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9519410d-db71-458b-9d73-ce1be203ee2e" path="/var/lib/kubelet/pods/9519410d-db71-458b-9d73-ce1be203ee2e/volumes" Feb 02 11:34:06 crc kubenswrapper[4925]: I0202 11:34:06.886652 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.084592 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-repo-setup-combined-ca-bundle\") pod \"92c7fc53-ac73-4641-90de-b290231ea6a9\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.084958 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-inventory\") pod \"92c7fc53-ac73-4641-90de-b290231ea6a9\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.085017 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ssh-key-openstack-edpm-ipam\") pod \"92c7fc53-ac73-4641-90de-b290231ea6a9\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.085055 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwk7t\" (UniqueName: \"kubernetes.io/projected/92c7fc53-ac73-4641-90de-b290231ea6a9-kube-api-access-fwk7t\") pod \"92c7fc53-ac73-4641-90de-b290231ea6a9\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.085177 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ceph\") pod \"92c7fc53-ac73-4641-90de-b290231ea6a9\" (UID: \"92c7fc53-ac73-4641-90de-b290231ea6a9\") " Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.088998 4925 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c7fc53-ac73-4641-90de-b290231ea6a9-kube-api-access-fwk7t" (OuterVolumeSpecName: "kube-api-access-fwk7t") pod "92c7fc53-ac73-4641-90de-b290231ea6a9" (UID: "92c7fc53-ac73-4641-90de-b290231ea6a9"). InnerVolumeSpecName "kube-api-access-fwk7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.089230 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ceph" (OuterVolumeSpecName: "ceph") pod "92c7fc53-ac73-4641-90de-b290231ea6a9" (UID: "92c7fc53-ac73-4641-90de-b290231ea6a9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.089671 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "92c7fc53-ac73-4641-90de-b290231ea6a9" (UID: "92c7fc53-ac73-4641-90de-b290231ea6a9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.112270 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92c7fc53-ac73-4641-90de-b290231ea6a9" (UID: "92c7fc53-ac73-4641-90de-b290231ea6a9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.115263 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-inventory" (OuterVolumeSpecName: "inventory") pod "92c7fc53-ac73-4641-90de-b290231ea6a9" (UID: "92c7fc53-ac73-4641-90de-b290231ea6a9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.189317 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.189395 4925 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.189436 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.189453 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c7fc53-ac73-4641-90de-b290231ea6a9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.189468 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwk7t\" (UniqueName: \"kubernetes.io/projected/92c7fc53-ac73-4641-90de-b290231ea6a9-kube-api-access-fwk7t\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.485321 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" event={"ID":"92c7fc53-ac73-4641-90de-b290231ea6a9","Type":"ContainerDied","Data":"75b042250a2c9c02f94f0165164189a055da3f12af6db0d47b7cbce11e6815b5"} Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.485363 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b042250a2c9c02f94f0165164189a055da3f12af6db0d47b7cbce11e6815b5" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.485428 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.561558 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w"] Feb 02 11:34:07 crc kubenswrapper[4925]: E0202 11:34:07.561966 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c7fc53-ac73-4641-90de-b290231ea6a9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.561989 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c7fc53-ac73-4641-90de-b290231ea6a9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:34:07 crc kubenswrapper[4925]: E0202 11:34:07.562027 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9519410d-db71-458b-9d73-ce1be203ee2e" containerName="extract-utilities" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.562036 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="9519410d-db71-458b-9d73-ce1be203ee2e" containerName="extract-utilities" Feb 02 11:34:07 crc kubenswrapper[4925]: E0202 11:34:07.562054 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9519410d-db71-458b-9d73-ce1be203ee2e" containerName="registry-server" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.562063 4925 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9519410d-db71-458b-9d73-ce1be203ee2e" containerName="registry-server" Feb 02 11:34:07 crc kubenswrapper[4925]: E0202 11:34:07.562088 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9519410d-db71-458b-9d73-ce1be203ee2e" containerName="extract-content" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.562097 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="9519410d-db71-458b-9d73-ce1be203ee2e" containerName="extract-content" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.562297 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c7fc53-ac73-4641-90de-b290231ea6a9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.562315 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="9519410d-db71-458b-9d73-ce1be203ee2e" containerName="registry-server" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.563052 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.568347 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.568552 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.568716 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.569064 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.569135 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.576961 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w"] Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.598921 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.598975 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.599014 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.599059 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.599164 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjmlc\" (UniqueName: \"kubernetes.io/projected/4a342fe3-c33f-4a54-a59f-9bba07acc904-kube-api-access-pjmlc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.700499 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.700561 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.700599 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.700624 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.700651 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjmlc\" (UniqueName: \"kubernetes.io/projected/4a342fe3-c33f-4a54-a59f-9bba07acc904-kube-api-access-pjmlc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.705048 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: 
\"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.705504 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.706129 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.706614 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.717684 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjmlc\" (UniqueName: \"kubernetes.io/projected/4a342fe3-c33f-4a54-a59f-9bba07acc904-kube-api-access-pjmlc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:07 crc kubenswrapper[4925]: I0202 11:34:07.885924 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:34:08 crc kubenswrapper[4925]: I0202 11:34:08.395931 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w"] Feb 02 11:34:08 crc kubenswrapper[4925]: I0202 11:34:08.498507 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" event={"ID":"4a342fe3-c33f-4a54-a59f-9bba07acc904","Type":"ContainerStarted","Data":"d3d48b485985650e33e58dc953cde299aae499ebad9609a217883ff9486eeb93"} Feb 02 11:34:09 crc kubenswrapper[4925]: I0202 11:34:09.512810 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" event={"ID":"4a342fe3-c33f-4a54-a59f-9bba07acc904","Type":"ContainerStarted","Data":"cc326063742d732de400323e205291a162d9bc7108ee12f2664e30d4f3ff4363"} Feb 02 11:34:09 crc kubenswrapper[4925]: I0202 11:34:09.533919 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" podStartSLOduration=2.095578058 podStartE2EDuration="2.53389116s" podCreationTimestamp="2026-02-02 11:34:07 +0000 UTC" firstStartedPulling="2026-02-02 11:34:08.398715497 +0000 UTC m=+2225.402964459" lastFinishedPulling="2026-02-02 11:34:08.837028599 +0000 UTC m=+2225.841277561" observedRunningTime="2026-02-02 11:34:09.528152084 +0000 UTC m=+2226.532401056" watchObservedRunningTime="2026-02-02 11:34:09.53389116 +0000 UTC m=+2226.538140122" Feb 02 11:34:28 crc kubenswrapper[4925]: I0202 11:34:28.847289 4925 scope.go:117] "RemoveContainer" containerID="dfdbd393999fcbb5136339f541a2102380216c7c05b9c0bcc026b3bd82abceee" Feb 02 11:34:28 crc kubenswrapper[4925]: I0202 11:34:28.874183 4925 scope.go:117] "RemoveContainer" containerID="8f809de8a2f1bff36f015f9f78a2d29ea2e088cd9d0c6e5d2f691a420fa189f7" Feb 02 11:34:28 crc 
kubenswrapper[4925]: I0202 11:34:28.934644 4925 scope.go:117] "RemoveContainer" containerID="48b033738c2f9271d217b8a100fbde866a7a91bfccbfba56b6f9212a13e3996d" Feb 02 11:34:29 crc kubenswrapper[4925]: I0202 11:34:29.004972 4925 scope.go:117] "RemoveContainer" containerID="c5190dcc7a541c402fb0b9ca1e02468e1c76dac0917fc01cca7ae56ac3d0af4a" Feb 02 11:34:29 crc kubenswrapper[4925]: I0202 11:34:29.048013 4925 scope.go:117] "RemoveContainer" containerID="4b82a9cc75b1b621942c6328c7ad5136df62b758c4bd5df9f1c719b40acea912" Feb 02 11:34:29 crc kubenswrapper[4925]: I0202 11:34:29.069593 4925 scope.go:117] "RemoveContainer" containerID="ca1a9e29b145438a862ed167e666138a4d2e16822ca661995ed7edd10a07469e" Feb 02 11:34:29 crc kubenswrapper[4925]: I0202 11:34:29.140683 4925 scope.go:117] "RemoveContainer" containerID="99dca2ab3bd137e6df78f9199328230df4fb85ab96970a902ad9f06da21417e1" Feb 02 11:35:13 crc kubenswrapper[4925]: I0202 11:35:13.399222 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:35:13 crc kubenswrapper[4925]: I0202 11:35:13.399835 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:35:29 crc kubenswrapper[4925]: I0202 11:35:29.278877 4925 scope.go:117] "RemoveContainer" containerID="9c8984bec5a25dcbef71beed5be03dcd4d44d475ef64a72d759f16f78a49d3da" Feb 02 11:35:29 crc kubenswrapper[4925]: I0202 11:35:29.309105 4925 scope.go:117] "RemoveContainer" containerID="3f228565c4fe0253647075c0d42779d0037ce915700b10d0a07a03b8112cebe3" Feb 02 11:35:43 crc 
kubenswrapper[4925]: I0202 11:35:43.274463 4925 generic.go:334] "Generic (PLEG): container finished" podID="4a342fe3-c33f-4a54-a59f-9bba07acc904" containerID="cc326063742d732de400323e205291a162d9bc7108ee12f2664e30d4f3ff4363" exitCode=0 Feb 02 11:35:43 crc kubenswrapper[4925]: I0202 11:35:43.274597 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" event={"ID":"4a342fe3-c33f-4a54-a59f-9bba07acc904","Type":"ContainerDied","Data":"cc326063742d732de400323e205291a162d9bc7108ee12f2664e30d4f3ff4363"} Feb 02 11:35:43 crc kubenswrapper[4925]: I0202 11:35:43.398857 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:35:43 crc kubenswrapper[4925]: I0202 11:35:43.398920 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.790905 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.918741 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ceph\") pod \"4a342fe3-c33f-4a54-a59f-9bba07acc904\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.918820 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjmlc\" (UniqueName: \"kubernetes.io/projected/4a342fe3-c33f-4a54-a59f-9bba07acc904-kube-api-access-pjmlc\") pod \"4a342fe3-c33f-4a54-a59f-9bba07acc904\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.918917 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ssh-key-openstack-edpm-ipam\") pod \"4a342fe3-c33f-4a54-a59f-9bba07acc904\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.918944 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-inventory\") pod \"4a342fe3-c33f-4a54-a59f-9bba07acc904\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.918990 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-bootstrap-combined-ca-bundle\") pod \"4a342fe3-c33f-4a54-a59f-9bba07acc904\" (UID: \"4a342fe3-c33f-4a54-a59f-9bba07acc904\") " Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.925315 4925 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ceph" (OuterVolumeSpecName: "ceph") pod "4a342fe3-c33f-4a54-a59f-9bba07acc904" (UID: "4a342fe3-c33f-4a54-a59f-9bba07acc904"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.926009 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a342fe3-c33f-4a54-a59f-9bba07acc904-kube-api-access-pjmlc" (OuterVolumeSpecName: "kube-api-access-pjmlc") pod "4a342fe3-c33f-4a54-a59f-9bba07acc904" (UID: "4a342fe3-c33f-4a54-a59f-9bba07acc904"). InnerVolumeSpecName "kube-api-access-pjmlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.927210 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4a342fe3-c33f-4a54-a59f-9bba07acc904" (UID: "4a342fe3-c33f-4a54-a59f-9bba07acc904"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.947278 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-inventory" (OuterVolumeSpecName: "inventory") pod "4a342fe3-c33f-4a54-a59f-9bba07acc904" (UID: "4a342fe3-c33f-4a54-a59f-9bba07acc904"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:35:44 crc kubenswrapper[4925]: I0202 11:35:44.955174 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a342fe3-c33f-4a54-a59f-9bba07acc904" (UID: "4a342fe3-c33f-4a54-a59f-9bba07acc904"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.021662 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.021702 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.021715 4925 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.021726 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a342fe3-c33f-4a54-a59f-9bba07acc904-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.021737 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjmlc\" (UniqueName: \"kubernetes.io/projected/4a342fe3-c33f-4a54-a59f-9bba07acc904-kube-api-access-pjmlc\") on node \"crc\" DevicePath \"\"" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.292822 4925 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" event={"ID":"4a342fe3-c33f-4a54-a59f-9bba07acc904","Type":"ContainerDied","Data":"d3d48b485985650e33e58dc953cde299aae499ebad9609a217883ff9486eeb93"} Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.292871 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3d48b485985650e33e58dc953cde299aae499ebad9609a217883ff9486eeb93" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.293002 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.410731 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg"] Feb 02 11:35:45 crc kubenswrapper[4925]: E0202 11:35:45.411213 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a342fe3-c33f-4a54-a59f-9bba07acc904" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.411241 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a342fe3-c33f-4a54-a59f-9bba07acc904" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.411452 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a342fe3-c33f-4a54-a59f-9bba07acc904" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.412216 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.416679 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.416935 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.416814 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.417131 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.417226 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.421424 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg"] Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.532607 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.532708 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29zhh\" (UniqueName: \"kubernetes.io/projected/34087aed-542d-424c-a71e-a277cf32d94c-kube-api-access-29zhh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: 
\"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.532743 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.532872 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.633977 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29zhh\" (UniqueName: \"kubernetes.io/projected/34087aed-542d-424c-a71e-a277cf32d94c-kube-api-access-29zhh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.634029 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" 
Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.634128 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.634157 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.640702 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.642269 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.643223 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ceph\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.657004 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29zhh\" (UniqueName: \"kubernetes.io/projected/34087aed-542d-424c-a71e-a277cf32d94c-kube-api-access-29zhh\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:45 crc kubenswrapper[4925]: I0202 11:35:45.727360 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:35:46 crc kubenswrapper[4925]: I0202 11:35:46.230014 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg"] Feb 02 11:35:46 crc kubenswrapper[4925]: I0202 11:35:46.237996 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:35:46 crc kubenswrapper[4925]: I0202 11:35:46.299796 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" event={"ID":"34087aed-542d-424c-a71e-a277cf32d94c","Type":"ContainerStarted","Data":"ab43c6bfb3f911d7c5e8151c0c23840aa99ef143a4f8e49ab24d5a779dcb4b11"} Feb 02 11:35:47 crc kubenswrapper[4925]: I0202 11:35:47.311220 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" event={"ID":"34087aed-542d-424c-a71e-a277cf32d94c","Type":"ContainerStarted","Data":"1116afc42c8dd1915890aedf4a9e90c959b876bec968120b80abd94bb4321e6e"} Feb 02 11:35:47 crc kubenswrapper[4925]: I0202 11:35:47.333423 4925 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" podStartSLOduration=1.7845215269999999 podStartE2EDuration="2.333401796s" podCreationTimestamp="2026-02-02 11:35:45 +0000 UTC" firstStartedPulling="2026-02-02 11:35:46.237766025 +0000 UTC m=+2323.242014987" lastFinishedPulling="2026-02-02 11:35:46.786646294 +0000 UTC m=+2323.790895256" observedRunningTime="2026-02-02 11:35:47.330502097 +0000 UTC m=+2324.334751069" watchObservedRunningTime="2026-02-02 11:35:47.333401796 +0000 UTC m=+2324.337650748" Feb 02 11:35:50 crc kubenswrapper[4925]: I0202 11:35:50.948820 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cd7xl"] Feb 02 11:35:50 crc kubenswrapper[4925]: I0202 11:35:50.952575 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:50 crc kubenswrapper[4925]: I0202 11:35:50.975326 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd7xl"] Feb 02 11:35:51 crc kubenswrapper[4925]: I0202 11:35:51.138982 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxkc\" (UniqueName: \"kubernetes.io/projected/4d95623f-c40d-4e66-aa98-138d58f43b8c-kube-api-access-zgxkc\") pod \"certified-operators-cd7xl\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc kubenswrapper[4925]: I0202 11:35:51.139224 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-utilities\") pod \"certified-operators-cd7xl\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc 
kubenswrapper[4925]: I0202 11:35:51.139368 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-catalog-content\") pod \"certified-operators-cd7xl\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc kubenswrapper[4925]: I0202 11:35:51.241478 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgxkc\" (UniqueName: \"kubernetes.io/projected/4d95623f-c40d-4e66-aa98-138d58f43b8c-kube-api-access-zgxkc\") pod \"certified-operators-cd7xl\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc kubenswrapper[4925]: I0202 11:35:51.241597 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-utilities\") pod \"certified-operators-cd7xl\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc kubenswrapper[4925]: I0202 11:35:51.241621 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-catalog-content\") pod \"certified-operators-cd7xl\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc kubenswrapper[4925]: I0202 11:35:51.242202 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-catalog-content\") pod \"certified-operators-cd7xl\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc 
kubenswrapper[4925]: I0202 11:35:51.242433 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-utilities\") pod \"certified-operators-cd7xl\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc kubenswrapper[4925]: I0202 11:35:51.260943 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxkc\" (UniqueName: \"kubernetes.io/projected/4d95623f-c40d-4e66-aa98-138d58f43b8c-kube-api-access-zgxkc\") pod \"certified-operators-cd7xl\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc kubenswrapper[4925]: I0202 11:35:51.289947 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:35:51 crc kubenswrapper[4925]: I0202 11:35:51.797812 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd7xl"] Feb 02 11:35:52 crc kubenswrapper[4925]: I0202 11:35:52.361720 4925 generic.go:334] "Generic (PLEG): container finished" podID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerID="7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4" exitCode=0 Feb 02 11:35:52 crc kubenswrapper[4925]: I0202 11:35:52.361780 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd7xl" event={"ID":"4d95623f-c40d-4e66-aa98-138d58f43b8c","Type":"ContainerDied","Data":"7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4"} Feb 02 11:35:52 crc kubenswrapper[4925]: I0202 11:35:52.362272 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd7xl" 
event={"ID":"4d95623f-c40d-4e66-aa98-138d58f43b8c","Type":"ContainerStarted","Data":"4e05dfd1dd2528a07895c72280a5a9b723197fb6bca3c1c492c77e7077c9dfec"} Feb 02 11:35:54 crc kubenswrapper[4925]: I0202 11:35:54.382497 4925 generic.go:334] "Generic (PLEG): container finished" podID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerID="034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938" exitCode=0 Feb 02 11:35:54 crc kubenswrapper[4925]: I0202 11:35:54.382579 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd7xl" event={"ID":"4d95623f-c40d-4e66-aa98-138d58f43b8c","Type":"ContainerDied","Data":"034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938"} Feb 02 11:35:55 crc kubenswrapper[4925]: I0202 11:35:55.393163 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd7xl" event={"ID":"4d95623f-c40d-4e66-aa98-138d58f43b8c","Type":"ContainerStarted","Data":"e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5"} Feb 02 11:35:55 crc kubenswrapper[4925]: I0202 11:35:55.421605 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cd7xl" podStartSLOduration=2.951273097 podStartE2EDuration="5.421581938s" podCreationTimestamp="2026-02-02 11:35:50 +0000 UTC" firstStartedPulling="2026-02-02 11:35:52.364809303 +0000 UTC m=+2329.369058255" lastFinishedPulling="2026-02-02 11:35:54.835118134 +0000 UTC m=+2331.839367096" observedRunningTime="2026-02-02 11:35:55.410565908 +0000 UTC m=+2332.414814890" watchObservedRunningTime="2026-02-02 11:35:55.421581938 +0000 UTC m=+2332.425830900" Feb 02 11:36:01 crc kubenswrapper[4925]: I0202 11:36:01.290358 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:36:01 crc kubenswrapper[4925]: I0202 11:36:01.290984 4925 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:36:01 crc kubenswrapper[4925]: I0202 11:36:01.338011 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:36:01 crc kubenswrapper[4925]: I0202 11:36:01.483453 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:36:03 crc kubenswrapper[4925]: I0202 11:36:03.338841 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd7xl"] Feb 02 11:36:03 crc kubenswrapper[4925]: I0202 11:36:03.451102 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cd7xl" podUID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerName="registry-server" containerID="cri-o://e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5" gracePeriod=2 Feb 02 11:36:03 crc kubenswrapper[4925]: I0202 11:36:03.876649 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:36:03 crc kubenswrapper[4925]: I0202 11:36:03.987038 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-catalog-content\") pod \"4d95623f-c40d-4e66-aa98-138d58f43b8c\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " Feb 02 11:36:03 crc kubenswrapper[4925]: I0202 11:36:03.987223 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-utilities\") pod \"4d95623f-c40d-4e66-aa98-138d58f43b8c\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " Feb 02 11:36:03 crc kubenswrapper[4925]: I0202 11:36:03.987260 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgxkc\" (UniqueName: \"kubernetes.io/projected/4d95623f-c40d-4e66-aa98-138d58f43b8c-kube-api-access-zgxkc\") pod \"4d95623f-c40d-4e66-aa98-138d58f43b8c\" (UID: \"4d95623f-c40d-4e66-aa98-138d58f43b8c\") " Feb 02 11:36:03 crc kubenswrapper[4925]: I0202 11:36:03.989226 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-utilities" (OuterVolumeSpecName: "utilities") pod "4d95623f-c40d-4e66-aa98-138d58f43b8c" (UID: "4d95623f-c40d-4e66-aa98-138d58f43b8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:36:03 crc kubenswrapper[4925]: I0202 11:36:03.995296 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d95623f-c40d-4e66-aa98-138d58f43b8c-kube-api-access-zgxkc" (OuterVolumeSpecName: "kube-api-access-zgxkc") pod "4d95623f-c40d-4e66-aa98-138d58f43b8c" (UID: "4d95623f-c40d-4e66-aa98-138d58f43b8c"). InnerVolumeSpecName "kube-api-access-zgxkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.042702 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d95623f-c40d-4e66-aa98-138d58f43b8c" (UID: "4d95623f-c40d-4e66-aa98-138d58f43b8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.089467 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.089509 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d95623f-c40d-4e66-aa98-138d58f43b8c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.089525 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgxkc\" (UniqueName: \"kubernetes.io/projected/4d95623f-c40d-4e66-aa98-138d58f43b8c-kube-api-access-zgxkc\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.465617 4925 generic.go:334] "Generic (PLEG): container finished" podID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerID="e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5" exitCode=0 Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.465670 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd7xl" event={"ID":"4d95623f-c40d-4e66-aa98-138d58f43b8c","Type":"ContainerDied","Data":"e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5"} Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.465690 4925 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd7xl" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.465711 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd7xl" event={"ID":"4d95623f-c40d-4e66-aa98-138d58f43b8c","Type":"ContainerDied","Data":"4e05dfd1dd2528a07895c72280a5a9b723197fb6bca3c1c492c77e7077c9dfec"} Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.465735 4925 scope.go:117] "RemoveContainer" containerID="e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.483600 4925 scope.go:117] "RemoveContainer" containerID="034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.500984 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd7xl"] Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.507112 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cd7xl"] Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.520424 4925 scope.go:117] "RemoveContainer" containerID="7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.547024 4925 scope.go:117] "RemoveContainer" containerID="e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5" Feb 02 11:36:04 crc kubenswrapper[4925]: E0202 11:36:04.547486 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5\": container with ID starting with e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5 not found: ID does not exist" containerID="e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.547549 
4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5"} err="failed to get container status \"e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5\": rpc error: code = NotFound desc = could not find container \"e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5\": container with ID starting with e0d3bfd516ffb6ba9bafda5065521f2f13400910d22702c2773d406690d8c6d5 not found: ID does not exist" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.547585 4925 scope.go:117] "RemoveContainer" containerID="034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938" Feb 02 11:36:04 crc kubenswrapper[4925]: E0202 11:36:04.547907 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938\": container with ID starting with 034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938 not found: ID does not exist" containerID="034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.547946 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938"} err="failed to get container status \"034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938\": rpc error: code = NotFound desc = could not find container \"034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938\": container with ID starting with 034747f6ea6fbe323be87d46f5aafddcea84da98e036e12f75dff63cd42d8938 not found: ID does not exist" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.547970 4925 scope.go:117] "RemoveContainer" containerID="7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4" Feb 02 11:36:04 crc kubenswrapper[4925]: E0202 
11:36:04.548249 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4\": container with ID starting with 7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4 not found: ID does not exist" containerID="7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.548282 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4"} err="failed to get container status \"7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4\": rpc error: code = NotFound desc = could not find container \"7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4\": container with ID starting with 7d24ab7071c984e0b74dcd321403d65fc508a558dc2ea09dd3e1a0bb36534bf4 not found: ID does not exist" Feb 02 11:36:04 crc kubenswrapper[4925]: I0202 11:36:04.692480 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d95623f-c40d-4e66-aa98-138d58f43b8c" path="/var/lib/kubelet/pods/4d95623f-c40d-4e66-aa98-138d58f43b8c/volumes" Feb 02 11:36:10 crc kubenswrapper[4925]: I0202 11:36:10.512091 4925 generic.go:334] "Generic (PLEG): container finished" podID="34087aed-542d-424c-a71e-a277cf32d94c" containerID="1116afc42c8dd1915890aedf4a9e90c959b876bec968120b80abd94bb4321e6e" exitCode=0 Feb 02 11:36:10 crc kubenswrapper[4925]: I0202 11:36:10.512181 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" event={"ID":"34087aed-542d-424c-a71e-a277cf32d94c","Type":"ContainerDied","Data":"1116afc42c8dd1915890aedf4a9e90c959b876bec968120b80abd94bb4321e6e"} Feb 02 11:36:11 crc kubenswrapper[4925]: I0202 11:36:11.996395 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.135908 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-inventory\") pod \"34087aed-542d-424c-a71e-a277cf32d94c\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.136041 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29zhh\" (UniqueName: \"kubernetes.io/projected/34087aed-542d-424c-a71e-a277cf32d94c-kube-api-access-29zhh\") pod \"34087aed-542d-424c-a71e-a277cf32d94c\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.136236 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ssh-key-openstack-edpm-ipam\") pod \"34087aed-542d-424c-a71e-a277cf32d94c\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.136337 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ceph\") pod \"34087aed-542d-424c-a71e-a277cf32d94c\" (UID: \"34087aed-542d-424c-a71e-a277cf32d94c\") " Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.141940 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ceph" (OuterVolumeSpecName: "ceph") pod "34087aed-542d-424c-a71e-a277cf32d94c" (UID: "34087aed-542d-424c-a71e-a277cf32d94c"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.142192 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34087aed-542d-424c-a71e-a277cf32d94c-kube-api-access-29zhh" (OuterVolumeSpecName: "kube-api-access-29zhh") pod "34087aed-542d-424c-a71e-a277cf32d94c" (UID: "34087aed-542d-424c-a71e-a277cf32d94c"). InnerVolumeSpecName "kube-api-access-29zhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.160951 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34087aed-542d-424c-a71e-a277cf32d94c" (UID: "34087aed-542d-424c-a71e-a277cf32d94c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.163261 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-inventory" (OuterVolumeSpecName: "inventory") pod "34087aed-542d-424c-a71e-a277cf32d94c" (UID: "34087aed-542d-424c-a71e-a277cf32d94c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.238428 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.238471 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.238485 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29zhh\" (UniqueName: \"kubernetes.io/projected/34087aed-542d-424c-a71e-a277cf32d94c-kube-api-access-29zhh\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.238500 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34087aed-542d-424c-a71e-a277cf32d94c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.529323 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" event={"ID":"34087aed-542d-424c-a71e-a277cf32d94c","Type":"ContainerDied","Data":"ab43c6bfb3f911d7c5e8151c0c23840aa99ef143a4f8e49ab24d5a779dcb4b11"} Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.529359 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab43c6bfb3f911d7c5e8151c0c23840aa99ef143a4f8e49ab24d5a779dcb4b11" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.529364 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.618058 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p"] Feb 02 11:36:12 crc kubenswrapper[4925]: E0202 11:36:12.618568 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34087aed-542d-424c-a71e-a277cf32d94c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.618589 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="34087aed-542d-424c-a71e-a277cf32d94c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:36:12 crc kubenswrapper[4925]: E0202 11:36:12.618613 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerName="extract-content" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.618619 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerName="extract-content" Feb 02 11:36:12 crc kubenswrapper[4925]: E0202 11:36:12.618636 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerName="registry-server" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.618644 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerName="registry-server" Feb 02 11:36:12 crc kubenswrapper[4925]: E0202 11:36:12.618656 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerName="extract-utilities" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.618663 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerName="extract-utilities" Feb 02 11:36:12 crc 
kubenswrapper[4925]: I0202 11:36:12.618833 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="34087aed-542d-424c-a71e-a277cf32d94c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.618845 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d95623f-c40d-4e66-aa98-138d58f43b8c" containerName="registry-server" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.619587 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.621815 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.621862 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.621815 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.621952 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.625475 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.626972 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p"] Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.752596 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.752646 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.752716 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.752738 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb45d\" (UniqueName: \"kubernetes.io/projected/1d4b3b51-6672-4310-92e9-5a5c88c192ba-kube-api-access-zb45d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.854440 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ssh-key-openstack-edpm-ipam\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.854501 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.854588 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.854614 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb45d\" (UniqueName: \"kubernetes.io/projected/1d4b3b51-6672-4310-92e9-5a5c88c192ba-kube-api-access-zb45d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.858796 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc 
kubenswrapper[4925]: I0202 11:36:12.858831 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.861178 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.874607 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb45d\" (UniqueName: \"kubernetes.io/projected/1d4b3b51-6672-4310-92e9-5a5c88c192ba-kube-api-access-zb45d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zb42p\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:12 crc kubenswrapper[4925]: I0202 11:36:12.938862 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.399326 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.399641 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.399684 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.400433 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.400498 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" gracePeriod=600 Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.470001 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p"] Feb 02 11:36:13 crc kubenswrapper[4925]: E0202 11:36:13.530972 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.539823 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" exitCode=0 Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.539881 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42"} Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.539944 4925 scope.go:117] "RemoveContainer" containerID="32e3054c374b3ba80c40613fe344e538f1d343435befd2568b4687e277c37ce4" Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.540785 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:36:13 crc kubenswrapper[4925]: E0202 11:36:13.541194 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:36:13 crc kubenswrapper[4925]: I0202 11:36:13.541835 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" event={"ID":"1d4b3b51-6672-4310-92e9-5a5c88c192ba","Type":"ContainerStarted","Data":"c93f063fea4d4c67dec78bfaaa5700b65c0384ed0030000107fe9cf41edb832b"} Feb 02 11:36:14 crc kubenswrapper[4925]: I0202 11:36:14.551395 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" event={"ID":"1d4b3b51-6672-4310-92e9-5a5c88c192ba","Type":"ContainerStarted","Data":"c0cc66a0b5bd0729b037c50d7795aafe657e7e75243c0bf246228ab71b16524d"} Feb 02 11:36:14 crc kubenswrapper[4925]: I0202 11:36:14.585941 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" podStartSLOduration=2.118198045 podStartE2EDuration="2.585919994s" podCreationTimestamp="2026-02-02 11:36:12 +0000 UTC" firstStartedPulling="2026-02-02 11:36:13.475386657 +0000 UTC m=+2350.479635619" lastFinishedPulling="2026-02-02 11:36:13.943108606 +0000 UTC m=+2350.947357568" observedRunningTime="2026-02-02 11:36:14.579632063 +0000 UTC m=+2351.583881025" watchObservedRunningTime="2026-02-02 11:36:14.585919994 +0000 UTC m=+2351.590168956" Feb 02 11:36:21 crc kubenswrapper[4925]: I0202 11:36:21.516472 4925 generic.go:334] "Generic (PLEG): container finished" podID="1d4b3b51-6672-4310-92e9-5a5c88c192ba" containerID="c0cc66a0b5bd0729b037c50d7795aafe657e7e75243c0bf246228ab71b16524d" exitCode=0 Feb 02 11:36:21 crc kubenswrapper[4925]: I0202 11:36:21.516593 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" event={"ID":"1d4b3b51-6672-4310-92e9-5a5c88c192ba","Type":"ContainerDied","Data":"c0cc66a0b5bd0729b037c50d7795aafe657e7e75243c0bf246228ab71b16524d"} Feb 02 
11:36:22 crc kubenswrapper[4925]: I0202 11:36:22.930415 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.011595 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ssh-key-openstack-edpm-ipam\") pod \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.012460 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb45d\" (UniqueName: \"kubernetes.io/projected/1d4b3b51-6672-4310-92e9-5a5c88c192ba-kube-api-access-zb45d\") pod \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.012602 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-inventory\") pod \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.012655 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ceph\") pod \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\" (UID: \"1d4b3b51-6672-4310-92e9-5a5c88c192ba\") " Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.017729 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4b3b51-6672-4310-92e9-5a5c88c192ba-kube-api-access-zb45d" (OuterVolumeSpecName: "kube-api-access-zb45d") pod "1d4b3b51-6672-4310-92e9-5a5c88c192ba" (UID: "1d4b3b51-6672-4310-92e9-5a5c88c192ba"). 
InnerVolumeSpecName "kube-api-access-zb45d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.017738 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ceph" (OuterVolumeSpecName: "ceph") pod "1d4b3b51-6672-4310-92e9-5a5c88c192ba" (UID: "1d4b3b51-6672-4310-92e9-5a5c88c192ba"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.040522 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1d4b3b51-6672-4310-92e9-5a5c88c192ba" (UID: "1d4b3b51-6672-4310-92e9-5a5c88c192ba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.042574 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-inventory" (OuterVolumeSpecName: "inventory") pod "1d4b3b51-6672-4310-92e9-5a5c88c192ba" (UID: "1d4b3b51-6672-4310-92e9-5a5c88c192ba"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.114664 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.114709 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.114720 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d4b3b51-6672-4310-92e9-5a5c88c192ba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.114732 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb45d\" (UniqueName: \"kubernetes.io/projected/1d4b3b51-6672-4310-92e9-5a5c88c192ba-kube-api-access-zb45d\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.533790 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" event={"ID":"1d4b3b51-6672-4310-92e9-5a5c88c192ba","Type":"ContainerDied","Data":"c93f063fea4d4c67dec78bfaaa5700b65c0384ed0030000107fe9cf41edb832b"} Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.533851 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c93f063fea4d4c67dec78bfaaa5700b65c0384ed0030000107fe9cf41edb832b" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.533858 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zb42p" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.729688 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc"] Feb 02 11:36:23 crc kubenswrapper[4925]: E0202 11:36:23.730173 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4b3b51-6672-4310-92e9-5a5c88c192ba" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.730196 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4b3b51-6672-4310-92e9-5a5c88c192ba" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.730413 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4b3b51-6672-4310-92e9-5a5c88c192ba" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.731168 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.735731 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.736670 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.736897 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.739558 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.740703 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.749419 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc"] Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.929591 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.929640 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr4t9\" (UniqueName: \"kubernetes.io/projected/ef8d17fd-9d76-4856-9308-9d7630003827-kube-api-access-rr4t9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: 
\"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.929793 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:23 crc kubenswrapper[4925]: I0202 11:36:23.929891 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.032734 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.033005 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr4t9\" (UniqueName: \"kubernetes.io/projected/ef8d17fd-9d76-4856-9308-9d7630003827-kube-api-access-rr4t9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.033094 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.033144 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.037834 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.039298 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.055200 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.064000 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr4t9\" (UniqueName: \"kubernetes.io/projected/ef8d17fd-9d76-4856-9308-9d7630003827-kube-api-access-rr4t9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pp5mc\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.347520 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.669738 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:36:24 crc kubenswrapper[4925]: E0202 11:36:24.670460 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:36:24 crc kubenswrapper[4925]: I0202 11:36:24.965783 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc"] Feb 02 11:36:25 crc kubenswrapper[4925]: I0202 11:36:25.550893 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" event={"ID":"ef8d17fd-9d76-4856-9308-9d7630003827","Type":"ContainerStarted","Data":"806c642e389c7dc7eae806fae6fba2bcf3b8d2828fcd1950a0ab8f699fe6b6f0"} Feb 02 11:36:26 crc kubenswrapper[4925]: I0202 11:36:26.559452 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" event={"ID":"ef8d17fd-9d76-4856-9308-9d7630003827","Type":"ContainerStarted","Data":"fa5a42b2d0e226fde5f6f0116e774d6ea791b2c373398413ebf5588f7603a7f5"} Feb 02 11:36:26 crc kubenswrapper[4925]: I0202 11:36:26.580196 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" podStartSLOduration=3.154645255 podStartE2EDuration="3.580180115s" podCreationTimestamp="2026-02-02 11:36:23 +0000 UTC" firstStartedPulling="2026-02-02 11:36:24.973309149 +0000 UTC m=+2361.977558151" lastFinishedPulling="2026-02-02 11:36:25.398844049 +0000 UTC m=+2362.403093011" observedRunningTime="2026-02-02 11:36:26.573879313 +0000 UTC m=+2363.578128275" watchObservedRunningTime="2026-02-02 11:36:26.580180115 +0000 UTC m=+2363.584429067" Feb 02 11:36:29 crc kubenswrapper[4925]: I0202 11:36:29.393428 4925 scope.go:117] "RemoveContainer" containerID="60a1b354c24d39924ebc537ea75bed0b3ad4c327b53caaa78f0368ae59ebb043" Feb 02 11:36:39 crc kubenswrapper[4925]: I0202 11:36:39.664001 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:36:39 crc kubenswrapper[4925]: E0202 11:36:39.664819 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:36:53 crc kubenswrapper[4925]: I0202 11:36:53.664593 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:36:53 crc kubenswrapper[4925]: E0202 
11:36:53.665401 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:36:57 crc kubenswrapper[4925]: I0202 11:36:57.803187 4925 generic.go:334] "Generic (PLEG): container finished" podID="ef8d17fd-9d76-4856-9308-9d7630003827" containerID="fa5a42b2d0e226fde5f6f0116e774d6ea791b2c373398413ebf5588f7603a7f5" exitCode=0 Feb 02 11:36:57 crc kubenswrapper[4925]: I0202 11:36:57.803263 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" event={"ID":"ef8d17fd-9d76-4856-9308-9d7630003827","Type":"ContainerDied","Data":"fa5a42b2d0e226fde5f6f0116e774d6ea791b2c373398413ebf5588f7603a7f5"} Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.203967 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.347136 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr4t9\" (UniqueName: \"kubernetes.io/projected/ef8d17fd-9d76-4856-9308-9d7630003827-kube-api-access-rr4t9\") pod \"ef8d17fd-9d76-4856-9308-9d7630003827\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.347239 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ceph\") pod \"ef8d17fd-9d76-4856-9308-9d7630003827\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.347301 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-inventory\") pod \"ef8d17fd-9d76-4856-9308-9d7630003827\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.347356 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ssh-key-openstack-edpm-ipam\") pod \"ef8d17fd-9d76-4856-9308-9d7630003827\" (UID: \"ef8d17fd-9d76-4856-9308-9d7630003827\") " Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.354205 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ceph" (OuterVolumeSpecName: "ceph") pod "ef8d17fd-9d76-4856-9308-9d7630003827" (UID: "ef8d17fd-9d76-4856-9308-9d7630003827"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.354901 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef8d17fd-9d76-4856-9308-9d7630003827-kube-api-access-rr4t9" (OuterVolumeSpecName: "kube-api-access-rr4t9") pod "ef8d17fd-9d76-4856-9308-9d7630003827" (UID: "ef8d17fd-9d76-4856-9308-9d7630003827"). InnerVolumeSpecName "kube-api-access-rr4t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.373581 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-inventory" (OuterVolumeSpecName: "inventory") pod "ef8d17fd-9d76-4856-9308-9d7630003827" (UID: "ef8d17fd-9d76-4856-9308-9d7630003827"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.376004 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ef8d17fd-9d76-4856-9308-9d7630003827" (UID: "ef8d17fd-9d76-4856-9308-9d7630003827"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.448764 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.448805 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.448818 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr4t9\" (UniqueName: \"kubernetes.io/projected/ef8d17fd-9d76-4856-9308-9d7630003827-kube-api-access-rr4t9\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.448827 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ef8d17fd-9d76-4856-9308-9d7630003827-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.818264 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" event={"ID":"ef8d17fd-9d76-4856-9308-9d7630003827","Type":"ContainerDied","Data":"806c642e389c7dc7eae806fae6fba2bcf3b8d2828fcd1950a0ab8f699fe6b6f0"} Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.818310 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806c642e389c7dc7eae806fae6fba2bcf3b8d2828fcd1950a0ab8f699fe6b6f0" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.818313 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pp5mc" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.910592 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2"] Feb 02 11:36:59 crc kubenswrapper[4925]: E0202 11:36:59.911018 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8d17fd-9d76-4856-9308-9d7630003827" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.911040 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8d17fd-9d76-4856-9308-9d7630003827" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.911270 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8d17fd-9d76-4856-9308-9d7630003827" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.911960 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.914699 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.914823 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.914907 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.914910 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.915158 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.928287 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2"] Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.959125 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.959198 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxws\" (UniqueName: \"kubernetes.io/projected/600dd95b-ee69-45e7-918b-85650f9e2980-kube-api-access-fmxws\") pod 
\"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.959284 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:36:59 crc kubenswrapper[4925]: I0202 11:36:59.959321 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.060428 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmxws\" (UniqueName: \"kubernetes.io/projected/600dd95b-ee69-45e7-918b-85650f9e2980-kube-api-access-fmxws\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.060520 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 
11:37:00.060587 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.061533 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.066228 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.066413 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.066745 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" 
(UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.077503 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmxws\" (UniqueName: \"kubernetes.io/projected/600dd95b-ee69-45e7-918b-85650f9e2980-kube-api-access-fmxws\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.231885 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.765651 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2"] Feb 02 11:37:00 crc kubenswrapper[4925]: I0202 11:37:00.831431 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" event={"ID":"600dd95b-ee69-45e7-918b-85650f9e2980","Type":"ContainerStarted","Data":"1f53a9d32c82952611730e74b5a798a243f885cec33201fdc6bf6276df52862d"} Feb 02 11:37:01 crc kubenswrapper[4925]: I0202 11:37:01.842753 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" event={"ID":"600dd95b-ee69-45e7-918b-85650f9e2980","Type":"ContainerStarted","Data":"20f5c90259f844bab635e50051bf07bcd2a08944d136e5fc61d12a69b64d64a5"} Feb 02 11:37:01 crc kubenswrapper[4925]: I0202 11:37:01.861477 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" podStartSLOduration=2.384145807 podStartE2EDuration="2.861456028s" podCreationTimestamp="2026-02-02 11:36:59 +0000 UTC" 
firstStartedPulling="2026-02-02 11:37:00.785093621 +0000 UTC m=+2397.789342583" lastFinishedPulling="2026-02-02 11:37:01.262403842 +0000 UTC m=+2398.266652804" observedRunningTime="2026-02-02 11:37:01.857892881 +0000 UTC m=+2398.862141833" watchObservedRunningTime="2026-02-02 11:37:01.861456028 +0000 UTC m=+2398.865704990" Feb 02 11:37:04 crc kubenswrapper[4925]: I0202 11:37:04.865651 4925 generic.go:334] "Generic (PLEG): container finished" podID="600dd95b-ee69-45e7-918b-85650f9e2980" containerID="20f5c90259f844bab635e50051bf07bcd2a08944d136e5fc61d12a69b64d64a5" exitCode=0 Feb 02 11:37:04 crc kubenswrapper[4925]: I0202 11:37:04.865822 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" event={"ID":"600dd95b-ee69-45e7-918b-85650f9e2980","Type":"ContainerDied","Data":"20f5c90259f844bab635e50051bf07bcd2a08944d136e5fc61d12a69b64d64a5"} Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.257728 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.293712 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ssh-key-openstack-edpm-ipam\") pod \"600dd95b-ee69-45e7-918b-85650f9e2980\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.293855 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ceph\") pod \"600dd95b-ee69-45e7-918b-85650f9e2980\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.293966 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-inventory\") pod \"600dd95b-ee69-45e7-918b-85650f9e2980\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.294006 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmxws\" (UniqueName: \"kubernetes.io/projected/600dd95b-ee69-45e7-918b-85650f9e2980-kube-api-access-fmxws\") pod \"600dd95b-ee69-45e7-918b-85650f9e2980\" (UID: \"600dd95b-ee69-45e7-918b-85650f9e2980\") " Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.300323 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ceph" (OuterVolumeSpecName: "ceph") pod "600dd95b-ee69-45e7-918b-85650f9e2980" (UID: "600dd95b-ee69-45e7-918b-85650f9e2980"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.300543 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600dd95b-ee69-45e7-918b-85650f9e2980-kube-api-access-fmxws" (OuterVolumeSpecName: "kube-api-access-fmxws") pod "600dd95b-ee69-45e7-918b-85650f9e2980" (UID: "600dd95b-ee69-45e7-918b-85650f9e2980"). InnerVolumeSpecName "kube-api-access-fmxws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.321176 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "600dd95b-ee69-45e7-918b-85650f9e2980" (UID: "600dd95b-ee69-45e7-918b-85650f9e2980"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.324989 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-inventory" (OuterVolumeSpecName: "inventory") pod "600dd95b-ee69-45e7-918b-85650f9e2980" (UID: "600dd95b-ee69-45e7-918b-85650f9e2980"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.396935 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.396960 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.396971 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/600dd95b-ee69-45e7-918b-85650f9e2980-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.397003 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmxws\" (UniqueName: \"kubernetes.io/projected/600dd95b-ee69-45e7-918b-85650f9e2980-kube-api-access-fmxws\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.882251 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" event={"ID":"600dd95b-ee69-45e7-918b-85650f9e2980","Type":"ContainerDied","Data":"1f53a9d32c82952611730e74b5a798a243f885cec33201fdc6bf6276df52862d"} Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.882296 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f53a9d32c82952611730e74b5a798a243f885cec33201fdc6bf6276df52862d" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.882356 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.946160 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t"] Feb 02 11:37:06 crc kubenswrapper[4925]: E0202 11:37:06.946528 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="600dd95b-ee69-45e7-918b-85650f9e2980" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.946546 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="600dd95b-ee69-45e7-918b-85650f9e2980" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.946710 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="600dd95b-ee69-45e7-918b-85650f9e2980" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.947322 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.949252 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.955221 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.955257 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.956914 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.957247 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t"] Feb 02 11:37:06 crc kubenswrapper[4925]: I0202 11:37:06.959783 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.015707 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.015954 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: 
\"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.016025 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqf2n\" (UniqueName: \"kubernetes.io/projected/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-kube-api-access-nqf2n\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.016213 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.118194 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.118275 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.118305 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqf2n\" (UniqueName: \"kubernetes.io/projected/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-kube-api-access-nqf2n\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.118356 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.121994 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.122005 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.122232 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: 
\"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.134769 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqf2n\" (UniqueName: \"kubernetes.io/projected/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-kube-api-access-nqf2n\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-csw6t\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.265199 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.664122 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:37:07 crc kubenswrapper[4925]: E0202 11:37:07.664731 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.753737 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t"] Feb 02 11:37:07 crc kubenswrapper[4925]: I0202 11:37:07.890613 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" event={"ID":"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14","Type":"ContainerStarted","Data":"b47e708429cdd74366530bcdfaf0b7b6a2076ac8974cef6600dee1a8687558c5"} Feb 02 11:37:08 crc 
kubenswrapper[4925]: I0202 11:37:08.901250 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" event={"ID":"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14","Type":"ContainerStarted","Data":"e9987a3cca32bd978bc576171707254e761cc5150fbf2442c01293fe1cf7e936"} Feb 02 11:37:21 crc kubenswrapper[4925]: I0202 11:37:21.664487 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:37:21 crc kubenswrapper[4925]: E0202 11:37:21.665402 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.331195 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" podStartSLOduration=26.676218929 podStartE2EDuration="27.331072195s" podCreationTimestamp="2026-02-02 11:37:06 +0000 UTC" firstStartedPulling="2026-02-02 11:37:07.761856583 +0000 UTC m=+2404.766105545" lastFinishedPulling="2026-02-02 11:37:08.416709849 +0000 UTC m=+2405.420958811" observedRunningTime="2026-02-02 11:37:08.92341658 +0000 UTC m=+2405.927665532" watchObservedRunningTime="2026-02-02 11:37:33.331072195 +0000 UTC m=+2430.335321207" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.355066 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cvsfj"] Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.357422 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.368397 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cvsfj"] Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.448222 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t984g\" (UniqueName: \"kubernetes.io/projected/dc723141-c00b-42e9-9dc7-a27a5135112d-kube-api-access-t984g\") pod \"redhat-operators-cvsfj\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.448648 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-catalog-content\") pod \"redhat-operators-cvsfj\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.448790 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-utilities\") pod \"redhat-operators-cvsfj\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.550253 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t984g\" (UniqueName: \"kubernetes.io/projected/dc723141-c00b-42e9-9dc7-a27a5135112d-kube-api-access-t984g\") pod \"redhat-operators-cvsfj\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.550328 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-catalog-content\") pod \"redhat-operators-cvsfj\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.550350 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-utilities\") pod \"redhat-operators-cvsfj\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.550974 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-utilities\") pod \"redhat-operators-cvsfj\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.551098 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-catalog-content\") pod \"redhat-operators-cvsfj\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.578846 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t984g\" (UniqueName: \"kubernetes.io/projected/dc723141-c00b-42e9-9dc7-a27a5135112d-kube-api-access-t984g\") pod \"redhat-operators-cvsfj\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.664956 4925 scope.go:117] "RemoveContainer" 
containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:37:33 crc kubenswrapper[4925]: E0202 11:37:33.665386 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.683484 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:33 crc kubenswrapper[4925]: I0202 11:37:33.966580 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cvsfj"] Feb 02 11:37:34 crc kubenswrapper[4925]: I0202 11:37:34.097068 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvsfj" event={"ID":"dc723141-c00b-42e9-9dc7-a27a5135112d","Type":"ContainerStarted","Data":"ec24c88397e4744284f0044fe1e2fe51d1242019eb4fe8abb442adf60b3026c9"} Feb 02 11:37:35 crc kubenswrapper[4925]: I0202 11:37:35.109132 4925 generic.go:334] "Generic (PLEG): container finished" podID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerID="c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104" exitCode=0 Feb 02 11:37:35 crc kubenswrapper[4925]: I0202 11:37:35.109185 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvsfj" event={"ID":"dc723141-c00b-42e9-9dc7-a27a5135112d","Type":"ContainerDied","Data":"c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104"} Feb 02 11:37:36 crc kubenswrapper[4925]: I0202 11:37:36.119202 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvsfj" 
event={"ID":"dc723141-c00b-42e9-9dc7-a27a5135112d","Type":"ContainerStarted","Data":"54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642"} Feb 02 11:37:37 crc kubenswrapper[4925]: I0202 11:37:37.127162 4925 generic.go:334] "Generic (PLEG): container finished" podID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerID="54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642" exitCode=0 Feb 02 11:37:37 crc kubenswrapper[4925]: I0202 11:37:37.127202 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvsfj" event={"ID":"dc723141-c00b-42e9-9dc7-a27a5135112d","Type":"ContainerDied","Data":"54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642"} Feb 02 11:37:38 crc kubenswrapper[4925]: I0202 11:37:38.138401 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvsfj" event={"ID":"dc723141-c00b-42e9-9dc7-a27a5135112d","Type":"ContainerStarted","Data":"81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a"} Feb 02 11:37:38 crc kubenswrapper[4925]: I0202 11:37:38.162445 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cvsfj" podStartSLOduration=2.675043525 podStartE2EDuration="5.162415953s" podCreationTimestamp="2026-02-02 11:37:33 +0000 UTC" firstStartedPulling="2026-02-02 11:37:35.111119636 +0000 UTC m=+2432.115368598" lastFinishedPulling="2026-02-02 11:37:37.598492064 +0000 UTC m=+2434.602741026" observedRunningTime="2026-02-02 11:37:38.154427335 +0000 UTC m=+2435.158676297" watchObservedRunningTime="2026-02-02 11:37:38.162415953 +0000 UTC m=+2435.166664915" Feb 02 11:37:43 crc kubenswrapper[4925]: I0202 11:37:43.683931 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:43 crc kubenswrapper[4925]: I0202 11:37:43.684529 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:43 crc kubenswrapper[4925]: I0202 11:37:43.739731 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:44 crc kubenswrapper[4925]: I0202 11:37:44.247244 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:44 crc kubenswrapper[4925]: I0202 11:37:44.294768 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cvsfj"] Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.205434 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cvsfj" podUID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerName="registry-server" containerID="cri-o://81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a" gracePeriod=2 Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.655166 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.744912 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-catalog-content\") pod \"dc723141-c00b-42e9-9dc7-a27a5135112d\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.745061 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t984g\" (UniqueName: \"kubernetes.io/projected/dc723141-c00b-42e9-9dc7-a27a5135112d-kube-api-access-t984g\") pod \"dc723141-c00b-42e9-9dc7-a27a5135112d\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.745197 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-utilities\") pod \"dc723141-c00b-42e9-9dc7-a27a5135112d\" (UID: \"dc723141-c00b-42e9-9dc7-a27a5135112d\") " Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.747931 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-utilities" (OuterVolumeSpecName: "utilities") pod "dc723141-c00b-42e9-9dc7-a27a5135112d" (UID: "dc723141-c00b-42e9-9dc7-a27a5135112d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.754854 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc723141-c00b-42e9-9dc7-a27a5135112d-kube-api-access-t984g" (OuterVolumeSpecName: "kube-api-access-t984g") pod "dc723141-c00b-42e9-9dc7-a27a5135112d" (UID: "dc723141-c00b-42e9-9dc7-a27a5135112d"). InnerVolumeSpecName "kube-api-access-t984g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.847685 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t984g\" (UniqueName: \"kubernetes.io/projected/dc723141-c00b-42e9-9dc7-a27a5135112d-kube-api-access-t984g\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.847729 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.864960 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc723141-c00b-42e9-9dc7-a27a5135112d" (UID: "dc723141-c00b-42e9-9dc7-a27a5135112d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:37:46 crc kubenswrapper[4925]: I0202 11:37:46.949521 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc723141-c00b-42e9-9dc7-a27a5135112d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.214560 4925 generic.go:334] "Generic (PLEG): container finished" podID="a07f6a0e-2ed8-4213-be0f-ed8ae1005a14" containerID="e9987a3cca32bd978bc576171707254e761cc5150fbf2442c01293fe1cf7e936" exitCode=0 Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.214678 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" event={"ID":"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14","Type":"ContainerDied","Data":"e9987a3cca32bd978bc576171707254e761cc5150fbf2442c01293fe1cf7e936"} Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.217283 4925 generic.go:334] "Generic 
(PLEG): container finished" podID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerID="81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a" exitCode=0 Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.217324 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvsfj" event={"ID":"dc723141-c00b-42e9-9dc7-a27a5135112d","Type":"ContainerDied","Data":"81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a"} Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.217387 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cvsfj" event={"ID":"dc723141-c00b-42e9-9dc7-a27a5135112d","Type":"ContainerDied","Data":"ec24c88397e4744284f0044fe1e2fe51d1242019eb4fe8abb442adf60b3026c9"} Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.217416 4925 scope.go:117] "RemoveContainer" containerID="81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.217344 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cvsfj" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.255042 4925 scope.go:117] "RemoveContainer" containerID="54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.257938 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cvsfj"] Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.270678 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cvsfj"] Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.283406 4925 scope.go:117] "RemoveContainer" containerID="c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.314524 4925 scope.go:117] "RemoveContainer" containerID="81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a" Feb 02 11:37:47 crc kubenswrapper[4925]: E0202 11:37:47.314953 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a\": container with ID starting with 81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a not found: ID does not exist" containerID="81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.314991 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a"} err="failed to get container status \"81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a\": rpc error: code = NotFound desc = could not find container \"81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a\": container with ID starting with 81ca0461c5f97efa99bb8c735bb243a4f90ec98317faa16bc7c16fd61d30116a not found: ID does 
not exist" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.315016 4925 scope.go:117] "RemoveContainer" containerID="54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642" Feb 02 11:37:47 crc kubenswrapper[4925]: E0202 11:37:47.315411 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642\": container with ID starting with 54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642 not found: ID does not exist" containerID="54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.315433 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642"} err="failed to get container status \"54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642\": rpc error: code = NotFound desc = could not find container \"54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642\": container with ID starting with 54e1a421e4b0ce7deaf2e39e1a1c4b5904496110efd0f420aa15304f86586642 not found: ID does not exist" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.315448 4925 scope.go:117] "RemoveContainer" containerID="c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104" Feb 02 11:37:47 crc kubenswrapper[4925]: E0202 11:37:47.315621 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104\": container with ID starting with c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104 not found: ID does not exist" containerID="c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.315641 4925 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104"} err="failed to get container status \"c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104\": rpc error: code = NotFound desc = could not find container \"c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104\": container with ID starting with c6d420bba4349feb8729fe311d58fda2c705d48efb9e6d38172d8fb2c653c104 not found: ID does not exist" Feb 02 11:37:47 crc kubenswrapper[4925]: I0202 11:37:47.664534 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:37:47 crc kubenswrapper[4925]: E0202 11:37:47.664814 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.635726 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.673874 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc723141-c00b-42e9-9dc7-a27a5135112d" path="/var/lib/kubelet/pods/dc723141-c00b-42e9-9dc7-a27a5135112d/volumes" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.782578 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-inventory\") pod \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.782770 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqf2n\" (UniqueName: \"kubernetes.io/projected/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-kube-api-access-nqf2n\") pod \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.782850 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ssh-key-openstack-edpm-ipam\") pod \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.782881 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ceph\") pod \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\" (UID: \"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14\") " Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.788953 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ceph" (OuterVolumeSpecName: "ceph") 
pod "a07f6a0e-2ed8-4213-be0f-ed8ae1005a14" (UID: "a07f6a0e-2ed8-4213-be0f-ed8ae1005a14"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.793207 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-kube-api-access-nqf2n" (OuterVolumeSpecName: "kube-api-access-nqf2n") pod "a07f6a0e-2ed8-4213-be0f-ed8ae1005a14" (UID: "a07f6a0e-2ed8-4213-be0f-ed8ae1005a14"). InnerVolumeSpecName "kube-api-access-nqf2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.809237 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a07f6a0e-2ed8-4213-be0f-ed8ae1005a14" (UID: "a07f6a0e-2ed8-4213-be0f-ed8ae1005a14"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.817136 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-inventory" (OuterVolumeSpecName: "inventory") pod "a07f6a0e-2ed8-4213-be0f-ed8ae1005a14" (UID: "a07f6a0e-2ed8-4213-be0f-ed8ae1005a14"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.884697 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.884732 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.884741 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:48 crc kubenswrapper[4925]: I0202 11:37:48.884750 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqf2n\" (UniqueName: \"kubernetes.io/projected/a07f6a0e-2ed8-4213-be0f-ed8ae1005a14-kube-api-access-nqf2n\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.261912 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" event={"ID":"a07f6a0e-2ed8-4213-be0f-ed8ae1005a14","Type":"ContainerDied","Data":"b47e708429cdd74366530bcdfaf0b7b6a2076ac8974cef6600dee1a8687558c5"} Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.262007 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b47e708429cdd74366530bcdfaf0b7b6a2076ac8974cef6600dee1a8687558c5" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.261980 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-csw6t" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.328735 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nl7n4"] Feb 02 11:37:49 crc kubenswrapper[4925]: E0202 11:37:49.329487 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerName="extract-content" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.329506 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerName="extract-content" Feb 02 11:37:49 crc kubenswrapper[4925]: E0202 11:37:49.329540 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerName="extract-utilities" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.329547 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerName="extract-utilities" Feb 02 11:37:49 crc kubenswrapper[4925]: E0202 11:37:49.329563 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07f6a0e-2ed8-4213-be0f-ed8ae1005a14" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.329571 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07f6a0e-2ed8-4213-be0f-ed8ae1005a14" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:37:49 crc kubenswrapper[4925]: E0202 11:37:49.329582 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerName="registry-server" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.329588 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerName="registry-server" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.329752 4925 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dc723141-c00b-42e9-9dc7-a27a5135112d" containerName="registry-server" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.329774 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a07f6a0e-2ed8-4213-be0f-ed8ae1005a14" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.330379 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.333307 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.333361 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.333593 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.333721 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.333822 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.350767 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nl7n4"] Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.393240 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ceph\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.393357 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.393431 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.393468 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdl2\" (UniqueName: \"kubernetes.io/projected/11befe83-a359-400c-b072-1778f7c29f74-kube-api-access-sxdl2\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: E0202 11:37:49.462732 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda07f6a0e_2ed8_4213_be0f_ed8ae1005a14.slice/crio-b47e708429cdd74366530bcdfaf0b7b6a2076ac8974cef6600dee1a8687558c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda07f6a0e_2ed8_4213_be0f_ed8ae1005a14.slice\": RecentStats: unable to find data in memory cache]" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.495374 
4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ceph\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.495484 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.495571 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.495597 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdl2\" (UniqueName: \"kubernetes.io/projected/11befe83-a359-400c-b072-1778f7c29f74-kube-api-access-sxdl2\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.500846 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc 
kubenswrapper[4925]: I0202 11:37:49.500891 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ceph\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.501245 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.515462 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdl2\" (UniqueName: \"kubernetes.io/projected/11befe83-a359-400c-b072-1778f7c29f74-kube-api-access-sxdl2\") pod \"ssh-known-hosts-edpm-deployment-nl7n4\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:49 crc kubenswrapper[4925]: I0202 11:37:49.658634 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:37:50 crc kubenswrapper[4925]: I0202 11:37:50.194406 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nl7n4"] Feb 02 11:37:50 crc kubenswrapper[4925]: I0202 11:37:50.270058 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" event={"ID":"11befe83-a359-400c-b072-1778f7c29f74","Type":"ContainerStarted","Data":"84cb2c486b15b841e3712ec4d21b2777ca44fac71795c7a5a4ce0758a0fa2ae2"} Feb 02 11:37:51 crc kubenswrapper[4925]: I0202 11:37:51.278440 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" event={"ID":"11befe83-a359-400c-b072-1778f7c29f74","Type":"ContainerStarted","Data":"ed8efc4e7c969ea45b09db5d772999845f090aa1aa33430d560da9ea470e6abc"} Feb 02 11:37:51 crc kubenswrapper[4925]: I0202 11:37:51.297445 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" podStartSLOduration=1.860115901 podStartE2EDuration="2.297426972s" podCreationTimestamp="2026-02-02 11:37:49 +0000 UTC" firstStartedPulling="2026-02-02 11:37:50.19736575 +0000 UTC m=+2447.201614722" lastFinishedPulling="2026-02-02 11:37:50.634676831 +0000 UTC m=+2447.638925793" observedRunningTime="2026-02-02 11:37:51.292771676 +0000 UTC m=+2448.297020638" watchObservedRunningTime="2026-02-02 11:37:51.297426972 +0000 UTC m=+2448.301675934" Feb 02 11:37:59 crc kubenswrapper[4925]: I0202 11:37:59.353759 4925 generic.go:334] "Generic (PLEG): container finished" podID="11befe83-a359-400c-b072-1778f7c29f74" containerID="ed8efc4e7c969ea45b09db5d772999845f090aa1aa33430d560da9ea470e6abc" exitCode=0 Feb 02 11:37:59 crc kubenswrapper[4925]: I0202 11:37:59.353833 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" 
event={"ID":"11befe83-a359-400c-b072-1778f7c29f74","Type":"ContainerDied","Data":"ed8efc4e7c969ea45b09db5d772999845f090aa1aa33430d560da9ea470e6abc"} Feb 02 11:37:59 crc kubenswrapper[4925]: I0202 11:37:59.665053 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:37:59 crc kubenswrapper[4925]: E0202 11:37:59.665650 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:38:00 crc kubenswrapper[4925]: I0202 11:38:00.807422 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:38:00 crc kubenswrapper[4925]: I0202 11:38:00.910688 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-inventory-0\") pod \"11befe83-a359-400c-b072-1778f7c29f74\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " Feb 02 11:38:00 crc kubenswrapper[4925]: I0202 11:38:00.910770 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ssh-key-openstack-edpm-ipam\") pod \"11befe83-a359-400c-b072-1778f7c29f74\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " Feb 02 11:38:00 crc kubenswrapper[4925]: I0202 11:38:00.911018 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxdl2\" (UniqueName: 
\"kubernetes.io/projected/11befe83-a359-400c-b072-1778f7c29f74-kube-api-access-sxdl2\") pod \"11befe83-a359-400c-b072-1778f7c29f74\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " Feb 02 11:38:00 crc kubenswrapper[4925]: I0202 11:38:00.911063 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ceph\") pod \"11befe83-a359-400c-b072-1778f7c29f74\" (UID: \"11befe83-a359-400c-b072-1778f7c29f74\") " Feb 02 11:38:00 crc kubenswrapper[4925]: I0202 11:38:00.920265 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ceph" (OuterVolumeSpecName: "ceph") pod "11befe83-a359-400c-b072-1778f7c29f74" (UID: "11befe83-a359-400c-b072-1778f7c29f74"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:00 crc kubenswrapper[4925]: I0202 11:38:00.920260 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11befe83-a359-400c-b072-1778f7c29f74-kube-api-access-sxdl2" (OuterVolumeSpecName: "kube-api-access-sxdl2") pod "11befe83-a359-400c-b072-1778f7c29f74" (UID: "11befe83-a359-400c-b072-1778f7c29f74"). InnerVolumeSpecName "kube-api-access-sxdl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:38:00 crc kubenswrapper[4925]: I0202 11:38:00.939288 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "11befe83-a359-400c-b072-1778f7c29f74" (UID: "11befe83-a359-400c-b072-1778f7c29f74"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:00 crc kubenswrapper[4925]: I0202 11:38:00.939736 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11befe83-a359-400c-b072-1778f7c29f74" (UID: "11befe83-a359-400c-b072-1778f7c29f74"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.013506 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxdl2\" (UniqueName: \"kubernetes.io/projected/11befe83-a359-400c-b072-1778f7c29f74-kube-api-access-sxdl2\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.013553 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.013570 4925 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.013582 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11befe83-a359-400c-b072-1778f7c29f74-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.392193 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" event={"ID":"11befe83-a359-400c-b072-1778f7c29f74","Type":"ContainerDied","Data":"84cb2c486b15b841e3712ec4d21b2777ca44fac71795c7a5a4ce0758a0fa2ae2"} Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.392261 4925 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84cb2c486b15b841e3712ec4d21b2777ca44fac71795c7a5a4ce0758a0fa2ae2" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.392365 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nl7n4" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.463561 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg"] Feb 02 11:38:01 crc kubenswrapper[4925]: E0202 11:38:01.464242 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11befe83-a359-400c-b072-1778f7c29f74" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.464261 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="11befe83-a359-400c-b072-1778f7c29f74" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.464455 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="11befe83-a359-400c-b072-1778f7c29f74" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.465020 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.468590 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.468773 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.468966 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.469114 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.470372 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.471710 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg"] Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.521446 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qlk5\" (UniqueName: \"kubernetes.io/projected/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-kube-api-access-9qlk5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.521487 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.521523 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.521542 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.622958 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qlk5\" (UniqueName: \"kubernetes.io/projected/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-kube-api-access-9qlk5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.623019 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.623072 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.623126 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.627628 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.630869 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.631616 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 
11:38:01.643304 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qlk5\" (UniqueName: \"kubernetes.io/projected/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-kube-api-access-9qlk5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rdnbg\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:01 crc kubenswrapper[4925]: I0202 11:38:01.786188 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:02 crc kubenswrapper[4925]: I0202 11:38:02.321696 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg"] Feb 02 11:38:02 crc kubenswrapper[4925]: W0202 11:38:02.326158 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd7acac2_73fe_4a28_853a_8455a8b7ddcc.slice/crio-23602f2f68064e98949d966f26a80474ec745e8693c12b56c3e7d94f897a41a6 WatchSource:0}: Error finding container 23602f2f68064e98949d966f26a80474ec745e8693c12b56c3e7d94f897a41a6: Status 404 returned error can't find the container with id 23602f2f68064e98949d966f26a80474ec745e8693c12b56c3e7d94f897a41a6 Feb 02 11:38:02 crc kubenswrapper[4925]: I0202 11:38:02.404283 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" event={"ID":"dd7acac2-73fe-4a28-853a-8455a8b7ddcc","Type":"ContainerStarted","Data":"23602f2f68064e98949d966f26a80474ec745e8693c12b56c3e7d94f897a41a6"} Feb 02 11:38:03 crc kubenswrapper[4925]: I0202 11:38:03.416158 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" event={"ID":"dd7acac2-73fe-4a28-853a-8455a8b7ddcc","Type":"ContainerStarted","Data":"be2ffe869066de09494db279bccead7232f1b6e3e0dc6c195699b9402a0dc5e8"} Feb 02 
11:38:03 crc kubenswrapper[4925]: I0202 11:38:03.442695 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" podStartSLOduration=2.01699207 podStartE2EDuration="2.442672804s" podCreationTimestamp="2026-02-02 11:38:01 +0000 UTC" firstStartedPulling="2026-02-02 11:38:02.328605751 +0000 UTC m=+2459.332854713" lastFinishedPulling="2026-02-02 11:38:02.754286485 +0000 UTC m=+2459.758535447" observedRunningTime="2026-02-02 11:38:03.434261265 +0000 UTC m=+2460.438510237" watchObservedRunningTime="2026-02-02 11:38:03.442672804 +0000 UTC m=+2460.446921776" Feb 02 11:38:10 crc kubenswrapper[4925]: I0202 11:38:10.466035 4925 generic.go:334] "Generic (PLEG): container finished" podID="dd7acac2-73fe-4a28-853a-8455a8b7ddcc" containerID="be2ffe869066de09494db279bccead7232f1b6e3e0dc6c195699b9402a0dc5e8" exitCode=0 Feb 02 11:38:10 crc kubenswrapper[4925]: I0202 11:38:10.466112 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" event={"ID":"dd7acac2-73fe-4a28-853a-8455a8b7ddcc","Type":"ContainerDied","Data":"be2ffe869066de09494db279bccead7232f1b6e3e0dc6c195699b9402a0dc5e8"} Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.664566 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:38:11 crc kubenswrapper[4925]: E0202 11:38:11.665160 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.857480 4925 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.908662 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-inventory\") pod \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.908738 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ssh-key-openstack-edpm-ipam\") pod \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.908758 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ceph\") pod \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.908807 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qlk5\" (UniqueName: \"kubernetes.io/projected/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-kube-api-access-9qlk5\") pod \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\" (UID: \"dd7acac2-73fe-4a28-853a-8455a8b7ddcc\") " Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.916233 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ceph" (OuterVolumeSpecName: "ceph") pod "dd7acac2-73fe-4a28-853a-8455a8b7ddcc" (UID: "dd7acac2-73fe-4a28-853a-8455a8b7ddcc"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.916306 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-kube-api-access-9qlk5" (OuterVolumeSpecName: "kube-api-access-9qlk5") pod "dd7acac2-73fe-4a28-853a-8455a8b7ddcc" (UID: "dd7acac2-73fe-4a28-853a-8455a8b7ddcc"). InnerVolumeSpecName "kube-api-access-9qlk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.941952 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-inventory" (OuterVolumeSpecName: "inventory") pod "dd7acac2-73fe-4a28-853a-8455a8b7ddcc" (UID: "dd7acac2-73fe-4a28-853a-8455a8b7ddcc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:11 crc kubenswrapper[4925]: I0202 11:38:11.943279 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd7acac2-73fe-4a28-853a-8455a8b7ddcc" (UID: "dd7acac2-73fe-4a28-853a-8455a8b7ddcc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.012155 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.012200 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.012223 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.012240 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qlk5\" (UniqueName: \"kubernetes.io/projected/dd7acac2-73fe-4a28-853a-8455a8b7ddcc-kube-api-access-9qlk5\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.482433 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" event={"ID":"dd7acac2-73fe-4a28-853a-8455a8b7ddcc","Type":"ContainerDied","Data":"23602f2f68064e98949d966f26a80474ec745e8693c12b56c3e7d94f897a41a6"} Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.482483 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23602f2f68064e98949d966f26a80474ec745e8693c12b56c3e7d94f897a41a6" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.482485 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rdnbg" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.552384 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx"] Feb 02 11:38:12 crc kubenswrapper[4925]: E0202 11:38:12.553023 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7acac2-73fe-4a28-853a-8455a8b7ddcc" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.553047 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7acac2-73fe-4a28-853a-8455a8b7ddcc" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.553270 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7acac2-73fe-4a28-853a-8455a8b7ddcc" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.554036 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.557388 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.557633 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.557899 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.558121 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.559878 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.562454 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx"] Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.622043 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.622135 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.622526 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr4pb\" (UniqueName: \"kubernetes.io/projected/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-kube-api-access-hr4pb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.622593 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.724166 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr4pb\" (UniqueName: \"kubernetes.io/projected/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-kube-api-access-hr4pb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.724220 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.724319 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceph\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.724359 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.729479 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.729530 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.729806 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 
02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.750429 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr4pb\" (UniqueName: \"kubernetes.io/projected/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-kube-api-access-hr4pb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:12 crc kubenswrapper[4925]: I0202 11:38:12.880252 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:13 crc kubenswrapper[4925]: I0202 11:38:13.386139 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx"] Feb 02 11:38:13 crc kubenswrapper[4925]: I0202 11:38:13.492610 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" event={"ID":"4c793349-e8e5-419c-9e2c-4d3e4dd7500c","Type":"ContainerStarted","Data":"a6853401c0820869d675a889538570161693dd1e3c6917cb33e5187f73d2a003"} Feb 02 11:38:14 crc kubenswrapper[4925]: I0202 11:38:14.501577 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" event={"ID":"4c793349-e8e5-419c-9e2c-4d3e4dd7500c","Type":"ContainerStarted","Data":"21917892517daf5b9f786b5bbd078e09170adb89e83c75266fd549d1729fd29d"} Feb 02 11:38:23 crc kubenswrapper[4925]: I0202 11:38:23.709970 4925 generic.go:334] "Generic (PLEG): container finished" podID="4c793349-e8e5-419c-9e2c-4d3e4dd7500c" containerID="21917892517daf5b9f786b5bbd078e09170adb89e83c75266fd549d1729fd29d" exitCode=0 Feb 02 11:38:23 crc kubenswrapper[4925]: I0202 11:38:23.710059 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" 
event={"ID":"4c793349-e8e5-419c-9e2c-4d3e4dd7500c","Type":"ContainerDied","Data":"21917892517daf5b9f786b5bbd078e09170adb89e83c75266fd549d1729fd29d"} Feb 02 11:38:24 crc kubenswrapper[4925]: I0202 11:38:24.670266 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:38:24 crc kubenswrapper[4925]: E0202 11:38:24.670529 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.156159 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.358339 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ceph\") pod \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.358489 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr4pb\" (UniqueName: \"kubernetes.io/projected/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-kube-api-access-hr4pb\") pod \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.358522 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-inventory\") pod 
\"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.358549 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ssh-key-openstack-edpm-ipam\") pod \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\" (UID: \"4c793349-e8e5-419c-9e2c-4d3e4dd7500c\") " Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.364340 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ceph" (OuterVolumeSpecName: "ceph") pod "4c793349-e8e5-419c-9e2c-4d3e4dd7500c" (UID: "4c793349-e8e5-419c-9e2c-4d3e4dd7500c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.364418 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-kube-api-access-hr4pb" (OuterVolumeSpecName: "kube-api-access-hr4pb") pod "4c793349-e8e5-419c-9e2c-4d3e4dd7500c" (UID: "4c793349-e8e5-419c-9e2c-4d3e4dd7500c"). InnerVolumeSpecName "kube-api-access-hr4pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.384495 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-inventory" (OuterVolumeSpecName: "inventory") pod "4c793349-e8e5-419c-9e2c-4d3e4dd7500c" (UID: "4c793349-e8e5-419c-9e2c-4d3e4dd7500c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.385011 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4c793349-e8e5-419c-9e2c-4d3e4dd7500c" (UID: "4c793349-e8e5-419c-9e2c-4d3e4dd7500c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.460684 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr4pb\" (UniqueName: \"kubernetes.io/projected/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-kube-api-access-hr4pb\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.460727 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.460737 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.460746 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4c793349-e8e5-419c-9e2c-4d3e4dd7500c-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.726109 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" event={"ID":"4c793349-e8e5-419c-9e2c-4d3e4dd7500c","Type":"ContainerDied","Data":"a6853401c0820869d675a889538570161693dd1e3c6917cb33e5187f73d2a003"} Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.726139 
4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.726161 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6853401c0820869d675a889538570161693dd1e3c6917cb33e5187f73d2a003" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.815183 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg"] Feb 02 11:38:25 crc kubenswrapper[4925]: E0202 11:38:25.815596 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c793349-e8e5-419c-9e2c-4d3e4dd7500c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.815622 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c793349-e8e5-419c-9e2c-4d3e4dd7500c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.815795 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c793349-e8e5-419c-9e2c-4d3e4dd7500c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.816346 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.819029 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.819175 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.820733 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.821381 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.821411 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.821744 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.822299 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.822645 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.828937 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg"] Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.868251 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw86l\" (UniqueName: 
\"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-kube-api-access-vw86l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.868596 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.868723 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.868864 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.868940 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.868984 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.869010 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.869097 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.869131 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.869159 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.869206 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.869243 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.869283 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.970279 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.970616 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.970739 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.970835 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.970957 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.971102 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.971225 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw86l\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-kube-api-access-vw86l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.971357 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.971457 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.971584 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.971715 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.971841 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 
crc kubenswrapper[4925]: I0202 11:38:25.971950 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.974593 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.974676 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.975883 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.975997 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.976677 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.976927 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.977358 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.977919 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: 
\"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.978116 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.982805 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.983645 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.988759 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 
11:38:25 crc kubenswrapper[4925]: I0202 11:38:25.990395 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw86l\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-kube-api-access-vw86l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:26 crc kubenswrapper[4925]: I0202 11:38:26.133867 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:26 crc kubenswrapper[4925]: I0202 11:38:26.632597 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg"] Feb 02 11:38:26 crc kubenswrapper[4925]: W0202 11:38:26.636929 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod749615c6_2bdb_4b47_aced_b8dcb3041df6.slice/crio-8053e5bf4b446d39285a61166f5ffe246aedb0b620255f3d9df61375b503ba97 WatchSource:0}: Error finding container 8053e5bf4b446d39285a61166f5ffe246aedb0b620255f3d9df61375b503ba97: Status 404 returned error can't find the container with id 8053e5bf4b446d39285a61166f5ffe246aedb0b620255f3d9df61375b503ba97 Feb 02 11:38:26 crc kubenswrapper[4925]: I0202 11:38:26.739325 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" event={"ID":"749615c6-2bdb-4b47-aced-b8dcb3041df6","Type":"ContainerStarted","Data":"8053e5bf4b446d39285a61166f5ffe246aedb0b620255f3d9df61375b503ba97"} Feb 02 11:38:27 crc kubenswrapper[4925]: I0202 11:38:27.747904 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" 
event={"ID":"749615c6-2bdb-4b47-aced-b8dcb3041df6","Type":"ContainerStarted","Data":"07f601120af8d1c5da54babea4318d2962a199600b7e4e27fb251c4a9422ef9f"} Feb 02 11:38:27 crc kubenswrapper[4925]: I0202 11:38:27.781126 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" podStartSLOduration=2.357294989 podStartE2EDuration="2.781104083s" podCreationTimestamp="2026-02-02 11:38:25 +0000 UTC" firstStartedPulling="2026-02-02 11:38:26.639397186 +0000 UTC m=+2483.643646148" lastFinishedPulling="2026-02-02 11:38:27.06320628 +0000 UTC m=+2484.067455242" observedRunningTime="2026-02-02 11:38:27.770829973 +0000 UTC m=+2484.775078935" watchObservedRunningTime="2026-02-02 11:38:27.781104083 +0000 UTC m=+2484.785353045" Feb 02 11:38:39 crc kubenswrapper[4925]: I0202 11:38:39.664758 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:38:39 crc kubenswrapper[4925]: E0202 11:38:39.665572 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:38:53 crc kubenswrapper[4925]: I0202 11:38:53.664960 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:38:53 crc kubenswrapper[4925]: E0202 11:38:53.665984 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:38:57 crc kubenswrapper[4925]: I0202 11:38:56.999845 4925 generic.go:334] "Generic (PLEG): container finished" podID="749615c6-2bdb-4b47-aced-b8dcb3041df6" containerID="07f601120af8d1c5da54babea4318d2962a199600b7e4e27fb251c4a9422ef9f" exitCode=0 Feb 02 11:38:57 crc kubenswrapper[4925]: I0202 11:38:56.999924 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" event={"ID":"749615c6-2bdb-4b47-aced-b8dcb3041df6","Type":"ContainerDied","Data":"07f601120af8d1c5da54babea4318d2962a199600b7e4e27fb251c4a9422ef9f"} Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.418600 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.592982 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593029 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ovn-combined-ca-bundle\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593051 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vw86l\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-kube-api-access-vw86l\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593098 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-bootstrap-combined-ca-bundle\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593150 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ceph\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593186 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-libvirt-combined-ca-bundle\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593209 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ssh-key-openstack-edpm-ipam\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593243 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-repo-setup-combined-ca-bundle\") pod 
\"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593283 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-nova-combined-ca-bundle\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593321 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-neutron-metadata-combined-ca-bundle\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593343 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-inventory\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593414 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: \"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.593462 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"749615c6-2bdb-4b47-aced-b8dcb3041df6\" (UID: 
\"749615c6-2bdb-4b47-aced-b8dcb3041df6\") " Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.599092 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.599516 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.599592 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ceph" (OuterVolumeSpecName: "ceph") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.599606 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.600877 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.600906 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-kube-api-access-vw86l" (OuterVolumeSpecName: "kube-api-access-vw86l") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "kube-api-access-vw86l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.602225 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.602918 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.603776 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.604460 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.610860 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.623330 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-inventory" (OuterVolumeSpecName: "inventory") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.631831 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "749615c6-2bdb-4b47-aced-b8dcb3041df6" (UID: "749615c6-2bdb-4b47-aced-b8dcb3041df6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699236 4925 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699311 4925 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699340 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw86l\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-kube-api-access-vw86l\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699360 4925 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699381 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc 
kubenswrapper[4925]: I0202 11:38:58.699400 4925 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699414 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699428 4925 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699442 4925 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699456 4925 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699470 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/749615c6-2bdb-4b47-aced-b8dcb3041df6-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699491 4925 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-libvirt-default-certs-0\") on 
node \"crc\" DevicePath \"\"" Feb 02 11:38:58 crc kubenswrapper[4925]: I0202 11:38:58.699515 4925 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/749615c6-2bdb-4b47-aced-b8dcb3041df6-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.014453 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" event={"ID":"749615c6-2bdb-4b47-aced-b8dcb3041df6","Type":"ContainerDied","Data":"8053e5bf4b446d39285a61166f5ffe246aedb0b620255f3d9df61375b503ba97"} Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.014496 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8053e5bf4b446d39285a61166f5ffe246aedb0b620255f3d9df61375b503ba97" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.014524 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.212023 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr"] Feb 02 11:38:59 crc kubenswrapper[4925]: E0202 11:38:59.212493 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749615c6-2bdb-4b47-aced-b8dcb3041df6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.212516 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="749615c6-2bdb-4b47-aced-b8dcb3041df6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.212740 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="749615c6-2bdb-4b47-aced-b8dcb3041df6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.251438 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr"] Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.251547 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.254414 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.254797 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.255060 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.255712 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.259628 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.411337 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.411408 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.411431 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv296\" (UniqueName: \"kubernetes.io/projected/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-kube-api-access-rv296\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.411645 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.512931 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.512973 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv296\" (UniqueName: \"kubernetes.io/projected/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-kube-api-access-rv296\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.513040 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-inventory\") pod 
\"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.513127 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.516980 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.517449 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.518698 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.529571 4925 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-rv296\" (UniqueName: \"kubernetes.io/projected/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-kube-api-access-rv296\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:38:59 crc kubenswrapper[4925]: I0202 11:38:59.566793 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:39:00 crc kubenswrapper[4925]: I0202 11:39:00.077627 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr"] Feb 02 11:39:01 crc kubenswrapper[4925]: I0202 11:39:01.035170 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" event={"ID":"c87ae68d-67eb-45b4-8971-5d5d14d6c36b","Type":"ContainerStarted","Data":"58d1a33fdf7614d890a12576769d216ce259ee174cddca00bf0aff33fa909193"} Feb 02 11:39:01 crc kubenswrapper[4925]: I0202 11:39:01.035750 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" event={"ID":"c87ae68d-67eb-45b4-8971-5d5d14d6c36b","Type":"ContainerStarted","Data":"30ccb2f99a8cda991aa934fd783cad7c95d7797688bbf4db51bb32b3152b3d79"} Feb 02 11:39:01 crc kubenswrapper[4925]: I0202 11:39:01.058268 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" podStartSLOduration=1.6425282079999999 podStartE2EDuration="2.05822495s" podCreationTimestamp="2026-02-02 11:38:59 +0000 UTC" firstStartedPulling="2026-02-02 11:39:00.087750569 +0000 UTC m=+2517.091999521" lastFinishedPulling="2026-02-02 11:39:00.503447301 +0000 UTC m=+2517.507696263" observedRunningTime="2026-02-02 11:39:01.056590796 +0000 UTC m=+2518.060839758" 
watchObservedRunningTime="2026-02-02 11:39:01.05822495 +0000 UTC m=+2518.062473912" Feb 02 11:39:06 crc kubenswrapper[4925]: I0202 11:39:06.082010 4925 generic.go:334] "Generic (PLEG): container finished" podID="c87ae68d-67eb-45b4-8971-5d5d14d6c36b" containerID="58d1a33fdf7614d890a12576769d216ce259ee174cddca00bf0aff33fa909193" exitCode=0 Feb 02 11:39:06 crc kubenswrapper[4925]: I0202 11:39:06.082103 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" event={"ID":"c87ae68d-67eb-45b4-8971-5d5d14d6c36b","Type":"ContainerDied","Data":"58d1a33fdf7614d890a12576769d216ce259ee174cddca00bf0aff33fa909193"} Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.481436 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.661538 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv296\" (UniqueName: \"kubernetes.io/projected/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-kube-api-access-rv296\") pod \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.661702 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ssh-key-openstack-edpm-ipam\") pod \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.661738 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ceph\") pod \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " Feb 02 11:39:07 crc 
kubenswrapper[4925]: I0202 11:39:07.662043 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-inventory\") pod \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\" (UID: \"c87ae68d-67eb-45b4-8971-5d5d14d6c36b\") " Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.664071 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:39:07 crc kubenswrapper[4925]: E0202 11:39:07.664762 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.668269 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ceph" (OuterVolumeSpecName: "ceph") pod "c87ae68d-67eb-45b4-8971-5d5d14d6c36b" (UID: "c87ae68d-67eb-45b4-8971-5d5d14d6c36b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.669105 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-kube-api-access-rv296" (OuterVolumeSpecName: "kube-api-access-rv296") pod "c87ae68d-67eb-45b4-8971-5d5d14d6c36b" (UID: "c87ae68d-67eb-45b4-8971-5d5d14d6c36b"). InnerVolumeSpecName "kube-api-access-rv296". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.689595 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-inventory" (OuterVolumeSpecName: "inventory") pod "c87ae68d-67eb-45b4-8971-5d5d14d6c36b" (UID: "c87ae68d-67eb-45b4-8971-5d5d14d6c36b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.699246 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c87ae68d-67eb-45b4-8971-5d5d14d6c36b" (UID: "c87ae68d-67eb-45b4-8971-5d5d14d6c36b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.764416 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv296\" (UniqueName: \"kubernetes.io/projected/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-kube-api-access-rv296\") on node \"crc\" DevicePath \"\"" Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.764585 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.764625 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:39:07 crc kubenswrapper[4925]: I0202 11:39:07.764635 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c87ae68d-67eb-45b4-8971-5d5d14d6c36b-inventory\") on node 
\"crc\" DevicePath \"\"" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.098936 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" event={"ID":"c87ae68d-67eb-45b4-8971-5d5d14d6c36b","Type":"ContainerDied","Data":"30ccb2f99a8cda991aa934fd783cad7c95d7797688bbf4db51bb32b3152b3d79"} Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.098975 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30ccb2f99a8cda991aa934fd783cad7c95d7797688bbf4db51bb32b3152b3d79" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.099032 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.170429 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb"] Feb 02 11:39:08 crc kubenswrapper[4925]: E0202 11:39:08.170786 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87ae68d-67eb-45b4-8971-5d5d14d6c36b" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.170807 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87ae68d-67eb-45b4-8971-5d5d14d6c36b" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.170970 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87ae68d-67eb-45b4-8971-5d5d14d6c36b" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.171518 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.173588 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.175802 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.179297 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.179304 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.179477 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.179489 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.189028 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb"] Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.272148 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.272205 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.272235 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.272253 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.272279 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhgq9\" (UniqueName: \"kubernetes.io/projected/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-kube-api-access-lhgq9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.272485 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.373915 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.374324 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.374351 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.374378 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.374397 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.374424 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhgq9\" (UniqueName: \"kubernetes.io/projected/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-kube-api-access-lhgq9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.375656 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.379903 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.380170 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.380837 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.387497 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.396716 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhgq9\" (UniqueName: \"kubernetes.io/projected/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-kube-api-access-lhgq9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sv6hb\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:08 crc kubenswrapper[4925]: I0202 11:39:08.489889 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:39:09 crc kubenswrapper[4925]: I0202 11:39:09.159169 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb"] Feb 02 11:39:10 crc kubenswrapper[4925]: I0202 11:39:10.113180 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" event={"ID":"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2","Type":"ContainerStarted","Data":"6cd72d581791f63655c5ab55e62031e8fa08fa0e84dbba49491d28211d732b6e"} Feb 02 11:39:10 crc kubenswrapper[4925]: I0202 11:39:10.113508 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" event={"ID":"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2","Type":"ContainerStarted","Data":"7956fd46b0334c24547d35c9ddca054f469d62fb9e62d940b038ad2520bf49a8"} Feb 02 11:39:10 crc kubenswrapper[4925]: I0202 11:39:10.137345 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" podStartSLOduration=1.6402485 podStartE2EDuration="2.137327334s" podCreationTimestamp="2026-02-02 11:39:08 +0000 UTC" firstStartedPulling="2026-02-02 11:39:09.15841338 +0000 UTC m=+2526.162662332" lastFinishedPulling="2026-02-02 11:39:09.655492204 +0000 UTC m=+2526.659741166" observedRunningTime="2026-02-02 11:39:10.13054951 +0000 UTC m=+2527.134798482" watchObservedRunningTime="2026-02-02 11:39:10.137327334 +0000 UTC m=+2527.141576296" Feb 02 11:39:18 crc kubenswrapper[4925]: I0202 11:39:18.664749 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:39:18 crc kubenswrapper[4925]: E0202 11:39:18.665957 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:39:32 crc kubenswrapper[4925]: I0202 11:39:32.665002 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:39:32 crc kubenswrapper[4925]: E0202 11:39:32.665846 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:39:46 crc kubenswrapper[4925]: I0202 11:39:46.665351 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:39:46 crc kubenswrapper[4925]: E0202 11:39:46.666159 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:40:01 crc kubenswrapper[4925]: I0202 11:40:01.664893 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:40:01 crc kubenswrapper[4925]: E0202 11:40:01.665855 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:40:13 crc kubenswrapper[4925]: I0202 11:40:13.646278 4925 generic.go:334] "Generic (PLEG): container finished" podID="9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" containerID="6cd72d581791f63655c5ab55e62031e8fa08fa0e84dbba49491d28211d732b6e" exitCode=0 Feb 02 11:40:13 crc kubenswrapper[4925]: I0202 11:40:13.646468 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" event={"ID":"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2","Type":"ContainerDied","Data":"6cd72d581791f63655c5ab55e62031e8fa08fa0e84dbba49491d28211d732b6e"} Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.102900 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.166887 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovn-combined-ca-bundle\") pod \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.166943 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-inventory\") pod \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.167022 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhgq9\" (UniqueName: 
\"kubernetes.io/projected/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-kube-api-access-lhgq9\") pod \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.167040 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ceph\") pod \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.167059 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ssh-key-openstack-edpm-ipam\") pod \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.167131 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovncontroller-config-0\") pod \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\" (UID: \"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2\") " Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.172969 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-kube-api-access-lhgq9" (OuterVolumeSpecName: "kube-api-access-lhgq9") pod "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" (UID: "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2"). InnerVolumeSpecName "kube-api-access-lhgq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.173297 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" (UID: "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.176165 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ceph" (OuterVolumeSpecName: "ceph") pod "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" (UID: "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.195602 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" (UID: "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.199228 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" (UID: "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.206102 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-inventory" (OuterVolumeSpecName: "inventory") pod "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" (UID: "9b644239-1d8a-4dd1-96ab-6125f8ccb4e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.268737 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.268779 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.268789 4925 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.268798 4925 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.268812 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.268820 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhgq9\" (UniqueName: 
\"kubernetes.io/projected/9b644239-1d8a-4dd1-96ab-6125f8ccb4e2-kube-api-access-lhgq9\") on node \"crc\" DevicePath \"\"" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.667184 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" event={"ID":"9b644239-1d8a-4dd1-96ab-6125f8ccb4e2","Type":"ContainerDied","Data":"7956fd46b0334c24547d35c9ddca054f469d62fb9e62d940b038ad2520bf49a8"} Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.667226 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7956fd46b0334c24547d35c9ddca054f469d62fb9e62d940b038ad2520bf49a8" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.667280 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sv6hb" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.765246 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds"] Feb 02 11:40:15 crc kubenswrapper[4925]: E0202 11:40:15.765607 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.765627 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.765795 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b644239-1d8a-4dd1-96ab-6125f8ccb4e2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.766360 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.768864 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.769304 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.769409 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.769323 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.769611 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.769664 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.769929 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.805022 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds"] Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.879417 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.879668 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.879894 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.879930 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.880140 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc 
kubenswrapper[4925]: I0202 11:40:15.880211 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v42f\" (UniqueName: \"kubernetes.io/projected/865424ac-9ae9-45a6-9f69-b239f8d3d746-kube-api-access-7v42f\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.880241 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.981936 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.982010 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.982031 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.982065 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.982100 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v42f\" (UniqueName: \"kubernetes.io/projected/865424ac-9ae9-45a6-9f69-b239f8d3d746-kube-api-access-7v42f\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.982120 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.982166 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.986417 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.986509 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.986705 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.988735 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.993024 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:15 crc kubenswrapper[4925]: I0202 11:40:15.996818 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:16 crc kubenswrapper[4925]: I0202 11:40:16.002546 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v42f\" (UniqueName: \"kubernetes.io/projected/865424ac-9ae9-45a6-9f69-b239f8d3d746-kube-api-access-7v42f\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:16 crc kubenswrapper[4925]: I0202 11:40:16.086764 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:40:16 crc kubenswrapper[4925]: I0202 11:40:16.614904 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds"] Feb 02 11:40:16 crc kubenswrapper[4925]: I0202 11:40:16.665621 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:40:16 crc kubenswrapper[4925]: E0202 11:40:16.666003 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:40:16 crc kubenswrapper[4925]: I0202 11:40:16.675398 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" event={"ID":"865424ac-9ae9-45a6-9f69-b239f8d3d746","Type":"ContainerStarted","Data":"79669fb4d85541ba5d5c53f30dd3bf26df90a1db942691d53c7baa852ea44d45"} Feb 02 11:40:17 crc kubenswrapper[4925]: I0202 11:40:17.683991 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" event={"ID":"865424ac-9ae9-45a6-9f69-b239f8d3d746","Type":"ContainerStarted","Data":"fba4da19faa7329b5d13b0141b8a542572d18de25a3fe536b59c66df9e5842a1"} Feb 02 11:40:17 crc kubenswrapper[4925]: I0202 11:40:17.710063 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" podStartSLOduration=2.057406666 podStartE2EDuration="2.710042102s" podCreationTimestamp="2026-02-02 11:40:15 +0000 UTC" 
firstStartedPulling="2026-02-02 11:40:16.622260363 +0000 UTC m=+2593.626509325" lastFinishedPulling="2026-02-02 11:40:17.274895799 +0000 UTC m=+2594.279144761" observedRunningTime="2026-02-02 11:40:17.70261068 +0000 UTC m=+2594.706859642" watchObservedRunningTime="2026-02-02 11:40:17.710042102 +0000 UTC m=+2594.714291064" Feb 02 11:40:31 crc kubenswrapper[4925]: I0202 11:40:31.664922 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:40:31 crc kubenswrapper[4925]: E0202 11:40:31.667126 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:40:44 crc kubenswrapper[4925]: I0202 11:40:44.677740 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:40:44 crc kubenswrapper[4925]: E0202 11:40:44.678716 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:40:55 crc kubenswrapper[4925]: I0202 11:40:55.666234 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:40:55 crc kubenswrapper[4925]: E0202 11:40:55.667561 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:41:10 crc kubenswrapper[4925]: I0202 11:41:10.664835 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:41:10 crc kubenswrapper[4925]: E0202 11:41:10.665661 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:41:11 crc kubenswrapper[4925]: I0202 11:41:11.145622 4925 generic.go:334] "Generic (PLEG): container finished" podID="865424ac-9ae9-45a6-9f69-b239f8d3d746" containerID="fba4da19faa7329b5d13b0141b8a542572d18de25a3fe536b59c66df9e5842a1" exitCode=0 Feb 02 11:41:11 crc kubenswrapper[4925]: I0202 11:41:11.145673 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" event={"ID":"865424ac-9ae9-45a6-9f69-b239f8d3d746","Type":"ContainerDied","Data":"fba4da19faa7329b5d13b0141b8a542572d18de25a3fe536b59c66df9e5842a1"} Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.676471 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.784629 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ceph\") pod \"865424ac-9ae9-45a6-9f69-b239f8d3d746\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.784669 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-nova-metadata-neutron-config-0\") pod \"865424ac-9ae9-45a6-9f69-b239f8d3d746\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.784718 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-metadata-combined-ca-bundle\") pod \"865424ac-9ae9-45a6-9f69-b239f8d3d746\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.784825 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ssh-key-openstack-edpm-ipam\") pod \"865424ac-9ae9-45a6-9f69-b239f8d3d746\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.784879 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v42f\" (UniqueName: \"kubernetes.io/projected/865424ac-9ae9-45a6-9f69-b239f8d3d746-kube-api-access-7v42f\") pod \"865424ac-9ae9-45a6-9f69-b239f8d3d746\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 
11:41:12.784923 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-ovn-metadata-agent-neutron-config-0\") pod \"865424ac-9ae9-45a6-9f69-b239f8d3d746\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.784967 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-inventory\") pod \"865424ac-9ae9-45a6-9f69-b239f8d3d746\" (UID: \"865424ac-9ae9-45a6-9f69-b239f8d3d746\") " Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.790627 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "865424ac-9ae9-45a6-9f69-b239f8d3d746" (UID: "865424ac-9ae9-45a6-9f69-b239f8d3d746"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.790825 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865424ac-9ae9-45a6-9f69-b239f8d3d746-kube-api-access-7v42f" (OuterVolumeSpecName: "kube-api-access-7v42f") pod "865424ac-9ae9-45a6-9f69-b239f8d3d746" (UID: "865424ac-9ae9-45a6-9f69-b239f8d3d746"). InnerVolumeSpecName "kube-api-access-7v42f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.794239 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ceph" (OuterVolumeSpecName: "ceph") pod "865424ac-9ae9-45a6-9f69-b239f8d3d746" (UID: "865424ac-9ae9-45a6-9f69-b239f8d3d746"). 
InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.811128 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "865424ac-9ae9-45a6-9f69-b239f8d3d746" (UID: "865424ac-9ae9-45a6-9f69-b239f8d3d746"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.812161 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "865424ac-9ae9-45a6-9f69-b239f8d3d746" (UID: "865424ac-9ae9-45a6-9f69-b239f8d3d746"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.814088 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "865424ac-9ae9-45a6-9f69-b239f8d3d746" (UID: "865424ac-9ae9-45a6-9f69-b239f8d3d746"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.816654 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-inventory" (OuterVolumeSpecName: "inventory") pod "865424ac-9ae9-45a6-9f69-b239f8d3d746" (UID: "865424ac-9ae9-45a6-9f69-b239f8d3d746"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.887444 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.887490 4925 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.887506 4925 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.887520 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.887532 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v42f\" (UniqueName: \"kubernetes.io/projected/865424ac-9ae9-45a6-9f69-b239f8d3d746-kube-api-access-7v42f\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.887546 4925 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:12 crc kubenswrapper[4925]: I0202 11:41:12.887559 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/865424ac-9ae9-45a6-9f69-b239f8d3d746-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.165458 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" event={"ID":"865424ac-9ae9-45a6-9f69-b239f8d3d746","Type":"ContainerDied","Data":"79669fb4d85541ba5d5c53f30dd3bf26df90a1db942691d53c7baa852ea44d45"} Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.165829 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79669fb4d85541ba5d5c53f30dd3bf26df90a1db942691d53c7baa852ea44d45" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.165502 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.288299 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct"] Feb 02 11:41:13 crc kubenswrapper[4925]: E0202 11:41:13.288619 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865424ac-9ae9-45a6-9f69-b239f8d3d746" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.288635 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="865424ac-9ae9-45a6-9f69-b239f8d3d746" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.291593 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="865424ac-9ae9-45a6-9f69-b239f8d3d746" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.304826 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.320025 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.320169 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.320431 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.320641 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.320753 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.320785 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.330670 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct"] Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.404036 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.404121 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvm4p\" (UniqueName: 
\"kubernetes.io/projected/8e448d5d-ae77-439d-804b-eb4bea2a957d-kube-api-access-gvm4p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.404292 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.404363 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.404393 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.404417 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: 
\"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.505895 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.506238 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.506260 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.506278 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.506385 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.506405 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvm4p\" (UniqueName: \"kubernetes.io/projected/8e448d5d-ae77-439d-804b-eb4bea2a957d-kube-api-access-gvm4p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.510223 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.510351 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.510915 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 
11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.511002 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.516670 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.527451 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvm4p\" (UniqueName: \"kubernetes.io/projected/8e448d5d-ae77-439d-804b-eb4bea2a957d-kube-api-access-gvm4p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b5rct\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:13 crc kubenswrapper[4925]: I0202 11:41:13.633596 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:41:14 crc kubenswrapper[4925]: I0202 11:41:14.169720 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct"] Feb 02 11:41:14 crc kubenswrapper[4925]: I0202 11:41:14.176016 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:41:15 crc kubenswrapper[4925]: I0202 11:41:15.186005 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" event={"ID":"8e448d5d-ae77-439d-804b-eb4bea2a957d","Type":"ContainerStarted","Data":"dea740f03c4af1ed7c3a5076469080bbaa88e801fbfffb73970e6bd1d9abcfb0"} Feb 02 11:41:15 crc kubenswrapper[4925]: I0202 11:41:15.186347 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" event={"ID":"8e448d5d-ae77-439d-804b-eb4bea2a957d","Type":"ContainerStarted","Data":"5d1e3f8f28c14b0c5f32335d5f0ddb061f47f8821fd56e13f1d3e90dd016914c"} Feb 02 11:41:15 crc kubenswrapper[4925]: I0202 11:41:15.214326 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" podStartSLOduration=1.783749026 podStartE2EDuration="2.214303214s" podCreationTimestamp="2026-02-02 11:41:13 +0000 UTC" firstStartedPulling="2026-02-02 11:41:14.175794312 +0000 UTC m=+2651.180043274" lastFinishedPulling="2026-02-02 11:41:14.6063485 +0000 UTC m=+2651.610597462" observedRunningTime="2026-02-02 11:41:15.204872308 +0000 UTC m=+2652.209121280" watchObservedRunningTime="2026-02-02 11:41:15.214303214 +0000 UTC m=+2652.218552196" Feb 02 11:41:24 crc kubenswrapper[4925]: I0202 11:41:24.669413 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:41:25 crc kubenswrapper[4925]: I0202 11:41:25.267169 
4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"40cf3ac2a0ac9b7206f6541b854f2b61cc2451fe97ce5ef5864cb5666fd27668"} Feb 02 11:43:26 crc kubenswrapper[4925]: I0202 11:43:26.833483 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-56477"] Feb 02 11:43:26 crc kubenswrapper[4925]: I0202 11:43:26.837519 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:26 crc kubenswrapper[4925]: I0202 11:43:26.845875 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56477"] Feb 02 11:43:26 crc kubenswrapper[4925]: I0202 11:43:26.991928 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-utilities\") pod \"redhat-marketplace-56477\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:26 crc kubenswrapper[4925]: I0202 11:43:26.992257 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqccp\" (UniqueName: \"kubernetes.io/projected/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-kube-api-access-kqccp\") pod \"redhat-marketplace-56477\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:26 crc kubenswrapper[4925]: I0202 11:43:26.992525 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-catalog-content\") pod \"redhat-marketplace-56477\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " 
pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:27 crc kubenswrapper[4925]: I0202 11:43:27.110388 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqccp\" (UniqueName: \"kubernetes.io/projected/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-kube-api-access-kqccp\") pod \"redhat-marketplace-56477\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:27 crc kubenswrapper[4925]: I0202 11:43:27.110554 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-catalog-content\") pod \"redhat-marketplace-56477\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:27 crc kubenswrapper[4925]: I0202 11:43:27.110664 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-utilities\") pod \"redhat-marketplace-56477\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:27 crc kubenswrapper[4925]: I0202 11:43:27.111799 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-utilities\") pod \"redhat-marketplace-56477\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:27 crc kubenswrapper[4925]: I0202 11:43:27.113650 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-catalog-content\") pod \"redhat-marketplace-56477\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " pod="openshift-marketplace/redhat-marketplace-56477" 
Feb 02 11:43:27 crc kubenswrapper[4925]: I0202 11:43:27.143839 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqccp\" (UniqueName: \"kubernetes.io/projected/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-kube-api-access-kqccp\") pod \"redhat-marketplace-56477\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:27 crc kubenswrapper[4925]: I0202 11:43:27.170388 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:27 crc kubenswrapper[4925]: I0202 11:43:27.667784 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56477"] Feb 02 11:43:28 crc kubenswrapper[4925]: I0202 11:43:28.289544 4925 generic.go:334] "Generic (PLEG): container finished" podID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerID="dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75" exitCode=0 Feb 02 11:43:28 crc kubenswrapper[4925]: I0202 11:43:28.289584 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56477" event={"ID":"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a","Type":"ContainerDied","Data":"dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75"} Feb 02 11:43:28 crc kubenswrapper[4925]: I0202 11:43:28.290846 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56477" event={"ID":"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a","Type":"ContainerStarted","Data":"e2a597a3632dc8d8735f375bec492156cb9d998dada90437fcba1e2bcdf65080"} Feb 02 11:43:30 crc kubenswrapper[4925]: I0202 11:43:30.309880 4925 generic.go:334] "Generic (PLEG): container finished" podID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerID="032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72" exitCode=0 Feb 02 11:43:30 crc kubenswrapper[4925]: I0202 11:43:30.309973 4925 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56477" event={"ID":"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a","Type":"ContainerDied","Data":"032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72"} Feb 02 11:43:31 crc kubenswrapper[4925]: I0202 11:43:31.320510 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56477" event={"ID":"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a","Type":"ContainerStarted","Data":"41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9"} Feb 02 11:43:31 crc kubenswrapper[4925]: I0202 11:43:31.343629 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-56477" podStartSLOduration=2.907916059 podStartE2EDuration="5.343605228s" podCreationTimestamp="2026-02-02 11:43:26 +0000 UTC" firstStartedPulling="2026-02-02 11:43:28.291515376 +0000 UTC m=+2785.295764338" lastFinishedPulling="2026-02-02 11:43:30.727204545 +0000 UTC m=+2787.731453507" observedRunningTime="2026-02-02 11:43:31.340941155 +0000 UTC m=+2788.345190117" watchObservedRunningTime="2026-02-02 11:43:31.343605228 +0000 UTC m=+2788.347854190" Feb 02 11:43:37 crc kubenswrapper[4925]: I0202 11:43:37.171682 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:37 crc kubenswrapper[4925]: I0202 11:43:37.172536 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:37 crc kubenswrapper[4925]: I0202 11:43:37.255771 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:37 crc kubenswrapper[4925]: I0202 11:43:37.410946 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:37 crc 
kubenswrapper[4925]: I0202 11:43:37.495953 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-56477"] Feb 02 11:43:39 crc kubenswrapper[4925]: I0202 11:43:39.387175 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-56477" podUID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerName="registry-server" containerID="cri-o://41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9" gracePeriod=2 Feb 02 11:43:39 crc kubenswrapper[4925]: I0202 11:43:39.850117 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:39 crc kubenswrapper[4925]: I0202 11:43:39.981172 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-utilities\") pod \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " Feb 02 11:43:39 crc kubenswrapper[4925]: I0202 11:43:39.981366 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-catalog-content\") pod \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " Feb 02 11:43:39 crc kubenswrapper[4925]: I0202 11:43:39.981403 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqccp\" (UniqueName: \"kubernetes.io/projected/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-kube-api-access-kqccp\") pod \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\" (UID: \"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a\") " Feb 02 11:43:39 crc kubenswrapper[4925]: I0202 11:43:39.982463 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-utilities" (OuterVolumeSpecName: "utilities") pod "88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" (UID: "88289df5-1f0c-4bd1-9dea-83ae5f8ead1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:43:39 crc kubenswrapper[4925]: I0202 11:43:39.987788 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-kube-api-access-kqccp" (OuterVolumeSpecName: "kube-api-access-kqccp") pod "88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" (UID: "88289df5-1f0c-4bd1-9dea-83ae5f8ead1a"). InnerVolumeSpecName "kube-api-access-kqccp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.005959 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" (UID: "88289df5-1f0c-4bd1-9dea-83ae5f8ead1a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.083045 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.083087 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.083101 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqccp\" (UniqueName: \"kubernetes.io/projected/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a-kube-api-access-kqccp\") on node \"crc\" DevicePath \"\"" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.402930 4925 generic.go:334] "Generic (PLEG): container finished" podID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerID="41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9" exitCode=0 Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.402975 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56477" event={"ID":"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a","Type":"ContainerDied","Data":"41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9"} Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.403000 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56477" event={"ID":"88289df5-1f0c-4bd1-9dea-83ae5f8ead1a","Type":"ContainerDied","Data":"e2a597a3632dc8d8735f375bec492156cb9d998dada90437fcba1e2bcdf65080"} Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.403018 4925 scope.go:117] "RemoveContainer" containerID="41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 
11:43:40.403169 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56477" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.438099 4925 scope.go:117] "RemoveContainer" containerID="032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.467344 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-56477"] Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.475084 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-56477"] Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.484877 4925 scope.go:117] "RemoveContainer" containerID="dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.518131 4925 scope.go:117] "RemoveContainer" containerID="41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9" Feb 02 11:43:40 crc kubenswrapper[4925]: E0202 11:43:40.518624 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9\": container with ID starting with 41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9 not found: ID does not exist" containerID="41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.518675 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9"} err="failed to get container status \"41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9\": rpc error: code = NotFound desc = could not find container \"41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9\": container with ID starting with 
41894a2a5d1e84acb113491ca894ce0efee1337530f167b44c725f04a4a4c0a9 not found: ID does not exist" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.518705 4925 scope.go:117] "RemoveContainer" containerID="032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72" Feb 02 11:43:40 crc kubenswrapper[4925]: E0202 11:43:40.519428 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72\": container with ID starting with 032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72 not found: ID does not exist" containerID="032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.519473 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72"} err="failed to get container status \"032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72\": rpc error: code = NotFound desc = could not find container \"032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72\": container with ID starting with 032099d358dec087f659ea47d5a1556bc8492e7b36371786b93c3341c0a12b72 not found: ID does not exist" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.519508 4925 scope.go:117] "RemoveContainer" containerID="dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75" Feb 02 11:43:40 crc kubenswrapper[4925]: E0202 11:43:40.519973 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75\": container with ID starting with dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75 not found: ID does not exist" containerID="dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75" Feb 02 11:43:40 crc 
kubenswrapper[4925]: I0202 11:43:40.520013 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75"} err="failed to get container status \"dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75\": rpc error: code = NotFound desc = could not find container \"dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75\": container with ID starting with dcdbee78824155451e53b54c5b096b427724891028330630fe31c9ab84356c75 not found: ID does not exist" Feb 02 11:43:40 crc kubenswrapper[4925]: I0202 11:43:40.674093 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" path="/var/lib/kubelet/pods/88289df5-1f0c-4bd1-9dea-83ae5f8ead1a/volumes" Feb 02 11:43:43 crc kubenswrapper[4925]: I0202 11:43:43.398441 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:43:43 crc kubenswrapper[4925]: I0202 11:43:43.398989 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:44:13 crc kubenswrapper[4925]: I0202 11:44:13.399162 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:44:13 crc kubenswrapper[4925]: I0202 11:44:13.399665 4925 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:44:43 crc kubenswrapper[4925]: I0202 11:44:43.398287 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:44:43 crc kubenswrapper[4925]: I0202 11:44:43.398906 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:44:43 crc kubenswrapper[4925]: I0202 11:44:43.399004 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:44:43 crc kubenswrapper[4925]: I0202 11:44:43.399855 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40cf3ac2a0ac9b7206f6541b854f2b61cc2451fe97ce5ef5864cb5666fd27668"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:44:43 crc kubenswrapper[4925]: I0202 11:44:43.399912 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" 
containerName="machine-config-daemon" containerID="cri-o://40cf3ac2a0ac9b7206f6541b854f2b61cc2451fe97ce5ef5864cb5666fd27668" gracePeriod=600 Feb 02 11:44:43 crc kubenswrapper[4925]: I0202 11:44:43.945155 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="40cf3ac2a0ac9b7206f6541b854f2b61cc2451fe97ce5ef5864cb5666fd27668" exitCode=0 Feb 02 11:44:43 crc kubenswrapper[4925]: I0202 11:44:43.945210 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"40cf3ac2a0ac9b7206f6541b854f2b61cc2451fe97ce5ef5864cb5666fd27668"} Feb 02 11:44:43 crc kubenswrapper[4925]: I0202 11:44:43.945512 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44"} Feb 02 11:44:43 crc kubenswrapper[4925]: I0202 11:44:43.945534 4925 scope.go:117] "RemoveContainer" containerID="26d1ff1a7ec24fa85892d5744c7b502506af350c30e4e1cd572ad55a6caacb42" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.155474 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq"] Feb 02 11:45:00 crc kubenswrapper[4925]: E0202 11:45:00.159331 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.159798 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4925]: E0202 11:45:00.159931 4925 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.160031 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4925]: E0202 11:45:00.160275 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.160375 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.160800 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="88289df5-1f0c-4bd1-9dea-83ae5f8ead1a" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.161849 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.164719 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.168241 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.173481 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq"] Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.286492 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbdj2\" (UniqueName: \"kubernetes.io/projected/2d283885-9df9-497f-ab4c-3faf24639605-kube-api-access-qbdj2\") pod 
\"collect-profiles-29500545-g6zmq\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.286610 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d283885-9df9-497f-ab4c-3faf24639605-secret-volume\") pod \"collect-profiles-29500545-g6zmq\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.287558 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d283885-9df9-497f-ab4c-3faf24639605-config-volume\") pod \"collect-profiles-29500545-g6zmq\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.389933 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbdj2\" (UniqueName: \"kubernetes.io/projected/2d283885-9df9-497f-ab4c-3faf24639605-kube-api-access-qbdj2\") pod \"collect-profiles-29500545-g6zmq\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.390403 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d283885-9df9-497f-ab4c-3faf24639605-secret-volume\") pod \"collect-profiles-29500545-g6zmq\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.390548 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d283885-9df9-497f-ab4c-3faf24639605-config-volume\") pod \"collect-profiles-29500545-g6zmq\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.391922 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d283885-9df9-497f-ab4c-3faf24639605-config-volume\") pod \"collect-profiles-29500545-g6zmq\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.399223 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d283885-9df9-497f-ab4c-3faf24639605-secret-volume\") pod \"collect-profiles-29500545-g6zmq\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.423852 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbdj2\" (UniqueName: \"kubernetes.io/projected/2d283885-9df9-497f-ab4c-3faf24639605-kube-api-access-qbdj2\") pod \"collect-profiles-29500545-g6zmq\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.494412 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:00 crc kubenswrapper[4925]: I0202 11:45:00.952948 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq"] Feb 02 11:45:01 crc kubenswrapper[4925]: I0202 11:45:01.107839 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" event={"ID":"2d283885-9df9-497f-ab4c-3faf24639605","Type":"ContainerStarted","Data":"4cb29efb68c298805134434593997b0d9cfb512463496ffc36f68dd27480f4d4"} Feb 02 11:45:02 crc kubenswrapper[4925]: I0202 11:45:02.118299 4925 generic.go:334] "Generic (PLEG): container finished" podID="2d283885-9df9-497f-ab4c-3faf24639605" containerID="243a3fc76ee4f11550e58b10089a69390b2f5d00d6a0d90a913c3a65a10fb449" exitCode=0 Feb 02 11:45:02 crc kubenswrapper[4925]: I0202 11:45:02.118517 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" event={"ID":"2d283885-9df9-497f-ab4c-3faf24639605","Type":"ContainerDied","Data":"243a3fc76ee4f11550e58b10089a69390b2f5d00d6a0d90a913c3a65a10fb449"} Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.434439 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.551692 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbdj2\" (UniqueName: \"kubernetes.io/projected/2d283885-9df9-497f-ab4c-3faf24639605-kube-api-access-qbdj2\") pod \"2d283885-9df9-497f-ab4c-3faf24639605\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.551829 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d283885-9df9-497f-ab4c-3faf24639605-secret-volume\") pod \"2d283885-9df9-497f-ab4c-3faf24639605\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.551990 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d283885-9df9-497f-ab4c-3faf24639605-config-volume\") pod \"2d283885-9df9-497f-ab4c-3faf24639605\" (UID: \"2d283885-9df9-497f-ab4c-3faf24639605\") " Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.552822 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d283885-9df9-497f-ab4c-3faf24639605-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d283885-9df9-497f-ab4c-3faf24639605" (UID: "2d283885-9df9-497f-ab4c-3faf24639605"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.558057 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d283885-9df9-497f-ab4c-3faf24639605-kube-api-access-qbdj2" (OuterVolumeSpecName: "kube-api-access-qbdj2") pod "2d283885-9df9-497f-ab4c-3faf24639605" (UID: "2d283885-9df9-497f-ab4c-3faf24639605"). 
InnerVolumeSpecName "kube-api-access-qbdj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.559011 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d283885-9df9-497f-ab4c-3faf24639605-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d283885-9df9-497f-ab4c-3faf24639605" (UID: "2d283885-9df9-497f-ab4c-3faf24639605"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.653818 4925 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d283885-9df9-497f-ab4c-3faf24639605-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.653849 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbdj2\" (UniqueName: \"kubernetes.io/projected/2d283885-9df9-497f-ab4c-3faf24639605-kube-api-access-qbdj2\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:03 crc kubenswrapper[4925]: I0202 11:45:03.653860 4925 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d283885-9df9-497f-ab4c-3faf24639605-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:04 crc kubenswrapper[4925]: I0202 11:45:04.136214 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" event={"ID":"2d283885-9df9-497f-ab4c-3faf24639605","Type":"ContainerDied","Data":"4cb29efb68c298805134434593997b0d9cfb512463496ffc36f68dd27480f4d4"} Feb 02 11:45:04 crc kubenswrapper[4925]: I0202 11:45:04.136253 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb29efb68c298805134434593997b0d9cfb512463496ffc36f68dd27480f4d4" Feb 02 11:45:04 crc kubenswrapper[4925]: I0202 11:45:04.136526 4925 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq" Feb 02 11:45:04 crc kubenswrapper[4925]: I0202 11:45:04.505560 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw"] Feb 02 11:45:04 crc kubenswrapper[4925]: I0202 11:45:04.513714 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-696pw"] Feb 02 11:45:04 crc kubenswrapper[4925]: I0202 11:45:04.675466 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e623d6f6-1bf2-43f4-a280-147617dbf9ef" path="/var/lib/kubelet/pods/e623d6f6-1bf2-43f4-a280-147617dbf9ef/volumes" Feb 02 11:45:06 crc kubenswrapper[4925]: I0202 11:45:06.153785 4925 generic.go:334] "Generic (PLEG): container finished" podID="8e448d5d-ae77-439d-804b-eb4bea2a957d" containerID="dea740f03c4af1ed7c3a5076469080bbaa88e801fbfffb73970e6bd1d9abcfb0" exitCode=0 Feb 02 11:45:06 crc kubenswrapper[4925]: I0202 11:45:06.153823 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" event={"ID":"8e448d5d-ae77-439d-804b-eb4bea2a957d","Type":"ContainerDied","Data":"dea740f03c4af1ed7c3a5076469080bbaa88e801fbfffb73970e6bd1d9abcfb0"} Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.550383 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.674407 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-secret-0\") pod \"8e448d5d-ae77-439d-804b-eb4bea2a957d\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.674492 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-inventory\") pod \"8e448d5d-ae77-439d-804b-eb4bea2a957d\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.674552 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvm4p\" (UniqueName: \"kubernetes.io/projected/8e448d5d-ae77-439d-804b-eb4bea2a957d-kube-api-access-gvm4p\") pod \"8e448d5d-ae77-439d-804b-eb4bea2a957d\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.674590 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ssh-key-openstack-edpm-ipam\") pod \"8e448d5d-ae77-439d-804b-eb4bea2a957d\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.674625 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ceph\") pod \"8e448d5d-ae77-439d-804b-eb4bea2a957d\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.674684 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-combined-ca-bundle\") pod \"8e448d5d-ae77-439d-804b-eb4bea2a957d\" (UID: \"8e448d5d-ae77-439d-804b-eb4bea2a957d\") " Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.680238 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ceph" (OuterVolumeSpecName: "ceph") pod "8e448d5d-ae77-439d-804b-eb4bea2a957d" (UID: "8e448d5d-ae77-439d-804b-eb4bea2a957d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.680368 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8e448d5d-ae77-439d-804b-eb4bea2a957d" (UID: "8e448d5d-ae77-439d-804b-eb4bea2a957d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.680453 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e448d5d-ae77-439d-804b-eb4bea2a957d-kube-api-access-gvm4p" (OuterVolumeSpecName: "kube-api-access-gvm4p") pod "8e448d5d-ae77-439d-804b-eb4bea2a957d" (UID: "8e448d5d-ae77-439d-804b-eb4bea2a957d"). InnerVolumeSpecName "kube-api-access-gvm4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.702642 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8e448d5d-ae77-439d-804b-eb4bea2a957d" (UID: "8e448d5d-ae77-439d-804b-eb4bea2a957d"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.705207 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-inventory" (OuterVolumeSpecName: "inventory") pod "8e448d5d-ae77-439d-804b-eb4bea2a957d" (UID: "8e448d5d-ae77-439d-804b-eb4bea2a957d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.706268 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8e448d5d-ae77-439d-804b-eb4bea2a957d" (UID: "8e448d5d-ae77-439d-804b-eb4bea2a957d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.777138 4925 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.777199 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.777213 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvm4p\" (UniqueName: \"kubernetes.io/projected/8e448d5d-ae77-439d-804b-eb4bea2a957d-kube-api-access-gvm4p\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.777228 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.777239 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:07 crc kubenswrapper[4925]: I0202 11:45:07.777250 4925 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e448d5d-ae77-439d-804b-eb4bea2a957d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.172943 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" event={"ID":"8e448d5d-ae77-439d-804b-eb4bea2a957d","Type":"ContainerDied","Data":"5d1e3f8f28c14b0c5f32335d5f0ddb061f47f8821fd56e13f1d3e90dd016914c"} Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.173339 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d1e3f8f28c14b0c5f32335d5f0ddb061f47f8821fd56e13f1d3e90dd016914c" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.173108 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b5rct" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.253464 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd"] Feb 02 11:45:08 crc kubenswrapper[4925]: E0202 11:45:08.253816 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e448d5d-ae77-439d-804b-eb4bea2a957d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.253835 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e448d5d-ae77-439d-804b-eb4bea2a957d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 11:45:08 crc kubenswrapper[4925]: E0202 11:45:08.253855 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d283885-9df9-497f-ab4c-3faf24639605" containerName="collect-profiles" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.253862 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d283885-9df9-497f-ab4c-3faf24639605" containerName="collect-profiles" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.254021 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e448d5d-ae77-439d-804b-eb4bea2a957d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.254036 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d283885-9df9-497f-ab4c-3faf24639605" containerName="collect-profiles" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.254606 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.256818 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dcpnz" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.257189 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.257609 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.258009 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.258300 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.258545 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.258841 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.259473 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.261187 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.273545 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd"] Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.389735 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.390114 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.390225 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.390328 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.390446 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.390550 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.390646 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.390838 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k6lq\" (UniqueName: \"kubernetes.io/projected/374c9a22-b870-43ee-a27a-499a0d607e32-kube-api-access-5k6lq\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.390962 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.391125 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.391395 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493419 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493530 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493558 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493579 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493598 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493624 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493646 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493670 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k6lq\" (UniqueName: \"kubernetes.io/projected/374c9a22-b870-43ee-a27a-499a0d607e32-kube-api-access-5k6lq\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493700 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493715 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.493741 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.494785 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.495067 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.498386 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.498782 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-custom-ceph-combined-ca-bundle\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.498998 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.499156 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.499639 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.499970 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.503807 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.510702 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.514193 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k6lq\" (UniqueName: \"kubernetes.io/projected/374c9a22-b870-43ee-a27a-499a0d607e32-kube-api-access-5k6lq\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:08 crc kubenswrapper[4925]: I0202 11:45:08.572609 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:45:09 crc kubenswrapper[4925]: I0202 11:45:09.112177 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd"] Feb 02 11:45:09 crc kubenswrapper[4925]: I0202 11:45:09.180460 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" event={"ID":"374c9a22-b870-43ee-a27a-499a0d607e32","Type":"ContainerStarted","Data":"e5a060bacbcaeabd9530944c4d3e765e1c896c9bcdeac93d64e0d167a8906a61"} Feb 02 11:45:10 crc kubenswrapper[4925]: I0202 11:45:10.190841 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" event={"ID":"374c9a22-b870-43ee-a27a-499a0d607e32","Type":"ContainerStarted","Data":"3a9c86d8b2b1f42a48c0d52009bd64573c01b2363f4538ecd88dc7d9c43ccedb"} Feb 02 11:45:10 crc kubenswrapper[4925]: I0202 11:45:10.215744 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" podStartSLOduration=1.75849897 podStartE2EDuration="2.215723541s" podCreationTimestamp="2026-02-02 11:45:08 +0000 UTC" firstStartedPulling="2026-02-02 11:45:09.125573909 +0000 UTC m=+2886.129822871" lastFinishedPulling="2026-02-02 11:45:09.58279848 +0000 UTC m=+2886.587047442" observedRunningTime="2026-02-02 11:45:10.207904299 +0000 UTC m=+2887.212153261" watchObservedRunningTime="2026-02-02 11:45:10.215723541 +0000 UTC m=+2887.219972493" Feb 02 11:45:29 crc kubenswrapper[4925]: I0202 11:45:29.626980 4925 scope.go:117] "RemoveContainer" containerID="701751bde1c852f488d42123640b2dfda58d005c19140f23a8612c24d1153520" Feb 02 11:46:43 crc kubenswrapper[4925]: I0202 11:46:43.398354 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:46:43 crc kubenswrapper[4925]: I0202 11:46:43.398763 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.526661 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r67fl"] Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.529869 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.551568 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r67fl"] Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.563503 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9zd4\" (UniqueName: \"kubernetes.io/projected/d4a1e446-9614-4649-a096-c9aae8857d33-kube-api-access-m9zd4\") pod \"certified-operators-r67fl\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.563566 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-utilities\") pod \"certified-operators-r67fl\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 
11:46:52.563743 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-catalog-content\") pod \"certified-operators-r67fl\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.665685 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-catalog-content\") pod \"certified-operators-r67fl\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.665766 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9zd4\" (UniqueName: \"kubernetes.io/projected/d4a1e446-9614-4649-a096-c9aae8857d33-kube-api-access-m9zd4\") pod \"certified-operators-r67fl\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.665799 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-utilities\") pod \"certified-operators-r67fl\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.666197 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-catalog-content\") pod \"certified-operators-r67fl\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 
11:46:52.666293 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-utilities\") pod \"certified-operators-r67fl\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.689737 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9zd4\" (UniqueName: \"kubernetes.io/projected/d4a1e446-9614-4649-a096-c9aae8857d33-kube-api-access-m9zd4\") pod \"certified-operators-r67fl\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:52 crc kubenswrapper[4925]: I0202 11:46:52.847842 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:46:53 crc kubenswrapper[4925]: I0202 11:46:53.303553 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r67fl"] Feb 02 11:46:54 crc kubenswrapper[4925]: I0202 11:46:54.027141 4925 generic.go:334] "Generic (PLEG): container finished" podID="d4a1e446-9614-4649-a096-c9aae8857d33" containerID="ff256593496f5bf3d28fb7882a6248ef7cc2727857c18a324369c8b481a3a910" exitCode=0 Feb 02 11:46:54 crc kubenswrapper[4925]: I0202 11:46:54.027210 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r67fl" event={"ID":"d4a1e446-9614-4649-a096-c9aae8857d33","Type":"ContainerDied","Data":"ff256593496f5bf3d28fb7882a6248ef7cc2727857c18a324369c8b481a3a910"} Feb 02 11:46:54 crc kubenswrapper[4925]: I0202 11:46:54.027463 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r67fl" event={"ID":"d4a1e446-9614-4649-a096-c9aae8857d33","Type":"ContainerStarted","Data":"d56d6e969fa8e42982cec44b247f9191ca71942d4f8e929e492e1dc93122c970"} 
Feb 02 11:46:54 crc kubenswrapper[4925]: I0202 11:46:54.029953 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:46:55 crc kubenswrapper[4925]: I0202 11:46:55.044271 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r67fl" event={"ID":"d4a1e446-9614-4649-a096-c9aae8857d33","Type":"ContainerStarted","Data":"68d558322583fd761870f4c464f898ea62160d4421a69d80a1153f3758737bd9"} Feb 02 11:46:56 crc kubenswrapper[4925]: I0202 11:46:56.056810 4925 generic.go:334] "Generic (PLEG): container finished" podID="d4a1e446-9614-4649-a096-c9aae8857d33" containerID="68d558322583fd761870f4c464f898ea62160d4421a69d80a1153f3758737bd9" exitCode=0 Feb 02 11:46:56 crc kubenswrapper[4925]: I0202 11:46:56.056889 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r67fl" event={"ID":"d4a1e446-9614-4649-a096-c9aae8857d33","Type":"ContainerDied","Data":"68d558322583fd761870f4c464f898ea62160d4421a69d80a1153f3758737bd9"} Feb 02 11:46:57 crc kubenswrapper[4925]: I0202 11:46:57.068127 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r67fl" event={"ID":"d4a1e446-9614-4649-a096-c9aae8857d33","Type":"ContainerStarted","Data":"a1317e12dee5cdf97ac1e1d5f333edaf18ffef951e2853e0da4e56b4e759ac5a"} Feb 02 11:46:57 crc kubenswrapper[4925]: I0202 11:46:57.086372 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r67fl" podStartSLOduration=2.665788731 podStartE2EDuration="5.086354721s" podCreationTimestamp="2026-02-02 11:46:52 +0000 UTC" firstStartedPulling="2026-02-02 11:46:54.029679924 +0000 UTC m=+2991.033928886" lastFinishedPulling="2026-02-02 11:46:56.450245914 +0000 UTC m=+2993.454494876" observedRunningTime="2026-02-02 11:46:57.084639404 +0000 UTC m=+2994.088888386" watchObservedRunningTime="2026-02-02 11:46:57.086354721 
+0000 UTC m=+2994.090603683" Feb 02 11:47:02 crc kubenswrapper[4925]: I0202 11:47:02.848731 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:47:02 crc kubenswrapper[4925]: I0202 11:47:02.849414 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:47:02 crc kubenswrapper[4925]: I0202 11:47:02.901464 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:47:03 crc kubenswrapper[4925]: I0202 11:47:03.164318 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:47:03 crc kubenswrapper[4925]: I0202 11:47:03.217349 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r67fl"] Feb 02 11:47:05 crc kubenswrapper[4925]: I0202 11:47:05.132276 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r67fl" podUID="d4a1e446-9614-4649-a096-c9aae8857d33" containerName="registry-server" containerID="cri-o://a1317e12dee5cdf97ac1e1d5f333edaf18ffef951e2853e0da4e56b4e759ac5a" gracePeriod=2 Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.144977 4925 generic.go:334] "Generic (PLEG): container finished" podID="d4a1e446-9614-4649-a096-c9aae8857d33" containerID="a1317e12dee5cdf97ac1e1d5f333edaf18ffef951e2853e0da4e56b4e759ac5a" exitCode=0 Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.145084 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r67fl" event={"ID":"d4a1e446-9614-4649-a096-c9aae8857d33","Type":"ContainerDied","Data":"a1317e12dee5cdf97ac1e1d5f333edaf18ffef951e2853e0da4e56b4e759ac5a"} Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.730141 4925 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.750566 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-utilities\") pod \"d4a1e446-9614-4649-a096-c9aae8857d33\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.750930 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9zd4\" (UniqueName: \"kubernetes.io/projected/d4a1e446-9614-4649-a096-c9aae8857d33-kube-api-access-m9zd4\") pod \"d4a1e446-9614-4649-a096-c9aae8857d33\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.750986 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-catalog-content\") pod \"d4a1e446-9614-4649-a096-c9aae8857d33\" (UID: \"d4a1e446-9614-4649-a096-c9aae8857d33\") " Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.756499 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-utilities" (OuterVolumeSpecName: "utilities") pod "d4a1e446-9614-4649-a096-c9aae8857d33" (UID: "d4a1e446-9614-4649-a096-c9aae8857d33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.760500 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a1e446-9614-4649-a096-c9aae8857d33-kube-api-access-m9zd4" (OuterVolumeSpecName: "kube-api-access-m9zd4") pod "d4a1e446-9614-4649-a096-c9aae8857d33" (UID: "d4a1e446-9614-4649-a096-c9aae8857d33"). 
InnerVolumeSpecName "kube-api-access-m9zd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.817472 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4a1e446-9614-4649-a096-c9aae8857d33" (UID: "d4a1e446-9614-4649-a096-c9aae8857d33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.854663 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9zd4\" (UniqueName: \"kubernetes.io/projected/d4a1e446-9614-4649-a096-c9aae8857d33-kube-api-access-m9zd4\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.854737 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:06 crc kubenswrapper[4925]: I0202 11:47:06.854757 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a1e446-9614-4649-a096-c9aae8857d33-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:07 crc kubenswrapper[4925]: I0202 11:47:07.159481 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r67fl" event={"ID":"d4a1e446-9614-4649-a096-c9aae8857d33","Type":"ContainerDied","Data":"d56d6e969fa8e42982cec44b247f9191ca71942d4f8e929e492e1dc93122c970"} Feb 02 11:47:07 crc kubenswrapper[4925]: I0202 11:47:07.159569 4925 scope.go:117] "RemoveContainer" containerID="a1317e12dee5cdf97ac1e1d5f333edaf18ffef951e2853e0da4e56b4e759ac5a" Feb 02 11:47:07 crc kubenswrapper[4925]: I0202 11:47:07.159766 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r67fl" Feb 02 11:47:07 crc kubenswrapper[4925]: I0202 11:47:07.199028 4925 scope.go:117] "RemoveContainer" containerID="68d558322583fd761870f4c464f898ea62160d4421a69d80a1153f3758737bd9" Feb 02 11:47:07 crc kubenswrapper[4925]: I0202 11:47:07.204348 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r67fl"] Feb 02 11:47:07 crc kubenswrapper[4925]: I0202 11:47:07.212196 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r67fl"] Feb 02 11:47:07 crc kubenswrapper[4925]: I0202 11:47:07.230523 4925 scope.go:117] "RemoveContainer" containerID="ff256593496f5bf3d28fb7882a6248ef7cc2727857c18a324369c8b481a3a910" Feb 02 11:47:08 crc kubenswrapper[4925]: I0202 11:47:08.675057 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a1e446-9614-4649-a096-c9aae8857d33" path="/var/lib/kubelet/pods/d4a1e446-9614-4649-a096-c9aae8857d33/volumes" Feb 02 11:47:13 crc kubenswrapper[4925]: I0202 11:47:13.398611 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:47:13 crc kubenswrapper[4925]: I0202 11:47:13.399243 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.425842 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-grjxw"] Feb 02 11:47:17 crc kubenswrapper[4925]: E0202 
11:47:17.426965 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a1e446-9614-4649-a096-c9aae8857d33" containerName="registry-server" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.426985 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a1e446-9614-4649-a096-c9aae8857d33" containerName="registry-server" Feb 02 11:47:17 crc kubenswrapper[4925]: E0202 11:47:17.427001 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a1e446-9614-4649-a096-c9aae8857d33" containerName="extract-content" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.427008 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a1e446-9614-4649-a096-c9aae8857d33" containerName="extract-content" Feb 02 11:47:17 crc kubenswrapper[4925]: E0202 11:47:17.427027 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a1e446-9614-4649-a096-c9aae8857d33" containerName="extract-utilities" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.427036 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a1e446-9614-4649-a096-c9aae8857d33" containerName="extract-utilities" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.427306 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a1e446-9614-4649-a096-c9aae8857d33" containerName="registry-server" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.429176 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.446220 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-grjxw"] Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.461998 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-utilities\") pod \"community-operators-grjxw\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.462047 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cszbr\" (UniqueName: \"kubernetes.io/projected/002e0bf5-06f8-4f06-adc7-76e228f27480-kube-api-access-cszbr\") pod \"community-operators-grjxw\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.462086 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-catalog-content\") pod \"community-operators-grjxw\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.563755 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-utilities\") pod \"community-operators-grjxw\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.563805 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cszbr\" (UniqueName: \"kubernetes.io/projected/002e0bf5-06f8-4f06-adc7-76e228f27480-kube-api-access-cszbr\") pod \"community-operators-grjxw\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.563830 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-catalog-content\") pod \"community-operators-grjxw\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.564417 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-catalog-content\") pod \"community-operators-grjxw\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.564567 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-utilities\") pod \"community-operators-grjxw\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.585801 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cszbr\" (UniqueName: \"kubernetes.io/projected/002e0bf5-06f8-4f06-adc7-76e228f27480-kube-api-access-cszbr\") pod \"community-operators-grjxw\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:17 crc kubenswrapper[4925]: I0202 11:47:17.763637 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:18 crc kubenswrapper[4925]: I0202 11:47:18.269451 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-grjxw"] Feb 02 11:47:19 crc kubenswrapper[4925]: I0202 11:47:19.251754 4925 generic.go:334] "Generic (PLEG): container finished" podID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerID="e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a" exitCode=0 Feb 02 11:47:19 crc kubenswrapper[4925]: I0202 11:47:19.251822 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grjxw" event={"ID":"002e0bf5-06f8-4f06-adc7-76e228f27480","Type":"ContainerDied","Data":"e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a"} Feb 02 11:47:19 crc kubenswrapper[4925]: I0202 11:47:19.251862 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grjxw" event={"ID":"002e0bf5-06f8-4f06-adc7-76e228f27480","Type":"ContainerStarted","Data":"a34a52c4f197d1e5deabf05af59fcee59a6fcfaa1ad67653374dfec360b4b6b6"} Feb 02 11:47:20 crc kubenswrapper[4925]: I0202 11:47:20.261767 4925 generic.go:334] "Generic (PLEG): container finished" podID="374c9a22-b870-43ee-a27a-499a0d607e32" containerID="3a9c86d8b2b1f42a48c0d52009bd64573c01b2363f4538ecd88dc7d9c43ccedb" exitCode=0 Feb 02 11:47:20 crc kubenswrapper[4925]: I0202 11:47:20.261810 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" event={"ID":"374c9a22-b870-43ee-a27a-499a0d607e32","Type":"ContainerDied","Data":"3a9c86d8b2b1f42a48c0d52009bd64573c01b2363f4538ecd88dc7d9c43ccedb"} Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.273062 4925 generic.go:334] "Generic (PLEG): container finished" podID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerID="e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b" 
exitCode=0 Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.273227 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grjxw" event={"ID":"002e0bf5-06f8-4f06-adc7-76e228f27480","Type":"ContainerDied","Data":"e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b"} Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.742329 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.846203 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-nova-extra-config-0\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.846825 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-ceph-nova-0\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.846880 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ssh-key-openstack-edpm-ipam\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.846970 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-custom-ceph-combined-ca-bundle\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: 
\"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.847005 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ceph\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.847050 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k6lq\" (UniqueName: \"kubernetes.io/projected/374c9a22-b870-43ee-a27a-499a0d607e32-kube-api-access-5k6lq\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.847178 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-1\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.847256 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-inventory\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.847338 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-1\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.847375 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-0\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.847433 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-0\") pod \"374c9a22-b870-43ee-a27a-499a0d607e32\" (UID: \"374c9a22-b870-43ee-a27a-499a0d607e32\") " Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.857292 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ceph" (OuterVolumeSpecName: "ceph") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.857444 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.861164 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374c9a22-b870-43ee-a27a-499a0d607e32-kube-api-access-5k6lq" (OuterVolumeSpecName: "kube-api-access-5k6lq") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "kube-api-access-5k6lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.891823 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.892802 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.894345 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-inventory" (OuterVolumeSpecName: "inventory") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.906588 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.908283 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.911336 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.920955 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.923980 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "374c9a22-b870-43ee-a27a-499a0d607e32" (UID: "374c9a22-b870-43ee-a27a-499a0d607e32"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950410 4925 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950471 4925 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950487 4925 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950501 4925 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950513 4925 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/374c9a22-b870-43ee-a27a-499a0d607e32-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950525 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950537 4925 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950557 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950570 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k6lq\" (UniqueName: \"kubernetes.io/projected/374c9a22-b870-43ee-a27a-499a0d607e32-kube-api-access-5k6lq\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950584 4925 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:21 crc kubenswrapper[4925]: I0202 11:47:21.950595 4925 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/374c9a22-b870-43ee-a27a-499a0d607e32-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:22 crc kubenswrapper[4925]: I0202 11:47:22.286446 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grjxw" event={"ID":"002e0bf5-06f8-4f06-adc7-76e228f27480","Type":"ContainerStarted","Data":"9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5"} Feb 02 11:47:22 crc kubenswrapper[4925]: I0202 11:47:22.289579 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" event={"ID":"374c9a22-b870-43ee-a27a-499a0d607e32","Type":"ContainerDied","Data":"e5a060bacbcaeabd9530944c4d3e765e1c896c9bcdeac93d64e0d167a8906a61"} Feb 02 11:47:22 crc kubenswrapper[4925]: I0202 11:47:22.289612 4925 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="e5a060bacbcaeabd9530944c4d3e765e1c896c9bcdeac93d64e0d167a8906a61" Feb 02 11:47:22 crc kubenswrapper[4925]: I0202 11:47:22.289677 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd" Feb 02 11:47:22 crc kubenswrapper[4925]: I0202 11:47:22.314681 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-grjxw" podStartSLOduration=2.814716802 podStartE2EDuration="5.314640825s" podCreationTimestamp="2026-02-02 11:47:17 +0000 UTC" firstStartedPulling="2026-02-02 11:47:19.256635873 +0000 UTC m=+3016.260884835" lastFinishedPulling="2026-02-02 11:47:21.756559896 +0000 UTC m=+3018.760808858" observedRunningTime="2026-02-02 11:47:22.306394841 +0000 UTC m=+3019.310643823" watchObservedRunningTime="2026-02-02 11:47:22.314640825 +0000 UTC m=+3019.318889787" Feb 02 11:47:27 crc kubenswrapper[4925]: I0202 11:47:27.764015 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:27 crc kubenswrapper[4925]: I0202 11:47:27.764361 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:27 crc kubenswrapper[4925]: I0202 11:47:27.812730 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:28 crc kubenswrapper[4925]: I0202 11:47:28.382402 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:28 crc kubenswrapper[4925]: I0202 11:47:28.434805 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-grjxw"] Feb 02 11:47:30 crc kubenswrapper[4925]: I0202 11:47:30.354529 4925 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/community-operators-grjxw" podUID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerName="registry-server" containerID="cri-o://9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5" gracePeriod=2 Feb 02 11:47:30 crc kubenswrapper[4925]: I0202 11:47:30.769967 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:30 crc kubenswrapper[4925]: I0202 11:47:30.918800 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-catalog-content\") pod \"002e0bf5-06f8-4f06-adc7-76e228f27480\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " Feb 02 11:47:30 crc kubenswrapper[4925]: I0202 11:47:30.918885 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-utilities\") pod \"002e0bf5-06f8-4f06-adc7-76e228f27480\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " Feb 02 11:47:30 crc kubenswrapper[4925]: I0202 11:47:30.919035 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cszbr\" (UniqueName: \"kubernetes.io/projected/002e0bf5-06f8-4f06-adc7-76e228f27480-kube-api-access-cszbr\") pod \"002e0bf5-06f8-4f06-adc7-76e228f27480\" (UID: \"002e0bf5-06f8-4f06-adc7-76e228f27480\") " Feb 02 11:47:30 crc kubenswrapper[4925]: I0202 11:47:30.919896 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-utilities" (OuterVolumeSpecName: "utilities") pod "002e0bf5-06f8-4f06-adc7-76e228f27480" (UID: "002e0bf5-06f8-4f06-adc7-76e228f27480"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:30 crc kubenswrapper[4925]: I0202 11:47:30.924983 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/002e0bf5-06f8-4f06-adc7-76e228f27480-kube-api-access-cszbr" (OuterVolumeSpecName: "kube-api-access-cszbr") pod "002e0bf5-06f8-4f06-adc7-76e228f27480" (UID: "002e0bf5-06f8-4f06-adc7-76e228f27480"). InnerVolumeSpecName "kube-api-access-cszbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.021296 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cszbr\" (UniqueName: \"kubernetes.io/projected/002e0bf5-06f8-4f06-adc7-76e228f27480-kube-api-access-cszbr\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.021341 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.366186 4925 generic.go:334] "Generic (PLEG): container finished" podID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerID="9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5" exitCode=0 Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.366243 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-grjxw" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.366251 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grjxw" event={"ID":"002e0bf5-06f8-4f06-adc7-76e228f27480","Type":"ContainerDied","Data":"9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5"} Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.366670 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grjxw" event={"ID":"002e0bf5-06f8-4f06-adc7-76e228f27480","Type":"ContainerDied","Data":"a34a52c4f197d1e5deabf05af59fcee59a6fcfaa1ad67653374dfec360b4b6b6"} Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.366701 4925 scope.go:117] "RemoveContainer" containerID="9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.395936 4925 scope.go:117] "RemoveContainer" containerID="e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.420843 4925 scope.go:117] "RemoveContainer" containerID="e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.458597 4925 scope.go:117] "RemoveContainer" containerID="9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5" Feb 02 11:47:31 crc kubenswrapper[4925]: E0202 11:47:31.459203 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5\": container with ID starting with 9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5 not found: ID does not exist" containerID="9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.459248 4925 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5"} err="failed to get container status \"9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5\": rpc error: code = NotFound desc = could not find container \"9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5\": container with ID starting with 9970c52426c20a2d23362ae62af62d459dc6ee29c4f555138951c75e5b0dd0a5 not found: ID does not exist" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.459273 4925 scope.go:117] "RemoveContainer" containerID="e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b" Feb 02 11:47:31 crc kubenswrapper[4925]: E0202 11:47:31.459624 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b\": container with ID starting with e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b not found: ID does not exist" containerID="e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.459648 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b"} err="failed to get container status \"e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b\": rpc error: code = NotFound desc = could not find container \"e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b\": container with ID starting with e2c346b4aaa68c3d16fea4f821a34c5420061343f5ff773215479659c38f362b not found: ID does not exist" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.459662 4925 scope.go:117] "RemoveContainer" containerID="e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a" Feb 02 11:47:31 crc kubenswrapper[4925]: E0202 11:47:31.459963 4925 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a\": container with ID starting with e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a not found: ID does not exist" containerID="e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.459995 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a"} err="failed to get container status \"e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a\": rpc error: code = NotFound desc = could not find container \"e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a\": container with ID starting with e96c05fbdda4795e1099fb6ea7d7efe6e7562cf2c114b421143e68648b79f12a not found: ID does not exist" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.476867 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "002e0bf5-06f8-4f06-adc7-76e228f27480" (UID: "002e0bf5-06f8-4f06-adc7-76e228f27480"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.529896 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/002e0bf5-06f8-4f06-adc7-76e228f27480-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.699308 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-grjxw"] Feb 02 11:47:31 crc kubenswrapper[4925]: I0202 11:47:31.707047 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-grjxw"] Feb 02 11:47:32 crc kubenswrapper[4925]: I0202 11:47:32.677953 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="002e0bf5-06f8-4f06-adc7-76e228f27480" path="/var/lib/kubelet/pods/002e0bf5-06f8-4f06-adc7-76e228f27480/volumes" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.521421 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 11:47:36 crc kubenswrapper[4925]: E0202 11:47:36.522184 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerName="extract-utilities" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.522205 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerName="extract-utilities" Feb 02 11:47:36 crc kubenswrapper[4925]: E0202 11:47:36.522223 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374c9a22-b870-43ee-a27a-499a0d607e32" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.522233 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="374c9a22-b870-43ee-a27a-499a0d607e32" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 02 11:47:36 crc kubenswrapper[4925]: E0202 
11:47:36.522257 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerName="registry-server" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.522263 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerName="registry-server" Feb 02 11:47:36 crc kubenswrapper[4925]: E0202 11:47:36.522272 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerName="extract-content" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.522277 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerName="extract-content" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.522455 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="002e0bf5-06f8-4f06-adc7-76e228f27480" containerName="registry-server" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.522469 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="374c9a22-b870-43ee-a27a-499a0d607e32" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.523416 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.530430 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.533304 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.538422 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608388 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608453 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608502 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608527 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-var-locks-cinder\") 
pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608630 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-run\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608671 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608697 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608733 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608752 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvkrz\" (UniqueName: \"kubernetes.io/projected/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-kube-api-access-vvkrz\") pod \"cinder-volume-volume1-0\" (UID: 
\"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608780 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608870 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608954 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.608989 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.609007 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.609035 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.609110 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.609657 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.611011 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.612681 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.624740 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710511 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710567 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-run\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710599 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-etc-nvme\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710633 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710675 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-sys\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710746 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710905 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/161c8104-b092-42f2-8e76-513b0e7991d6-ceph\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710947 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710969 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-dev\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710993 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-etc-machine-id\") 
pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711031 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkxkj\" (UniqueName: \"kubernetes.io/projected/161c8104-b092-42f2-8e76-513b0e7991d6-kube-api-access-vkxkj\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711088 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-config-data-custom\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711114 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711138 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711173 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-run\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 
11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711195 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711231 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711265 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711293 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711337 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711362 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-vvkrz\" (UniqueName: \"kubernetes.io/projected/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-kube-api-access-vvkrz\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711397 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711442 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711480 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711503 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-lib-modules\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711531 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-run\") pod 
\"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711539 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710836 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.710862 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711638 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711705 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-config-data\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: 
I0202 11:47:36.711782 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711822 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711846 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.711881 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.712007 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.712022 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.712098 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.712157 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.712427 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.712493 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.712497 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-scripts\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " 
pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.717745 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.717901 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.719178 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.719690 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.727393 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.727432 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vvkrz\" (UniqueName: \"kubernetes.io/projected/0e7f2d02-9fa4-4e06-a6ae-77c1390e574b-kube-api-access-vvkrz\") pod \"cinder-volume-volume1-0\" (UID: \"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.813948 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814028 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814061 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-lib-modules\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814106 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-config-data\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814129 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: 
\"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814157 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-scripts\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814194 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814219 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-run\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814248 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-etc-nvme\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814270 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-sys\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814287 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/161c8104-b092-42f2-8e76-513b0e7991d6-ceph\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814314 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-dev\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814333 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814362 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkxkj\" (UniqueName: \"kubernetes.io/projected/161c8104-b092-42f2-8e76-513b0e7991d6-kube-api-access-vkxkj\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814383 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-config-data-custom\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814413 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc 
kubenswrapper[4925]: I0202 11:47:36.814505 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814571 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814613 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.814634 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-lib-modules\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.815367 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-sys\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.815464 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-etc-machine-id\") pod \"cinder-backup-0\" (UID: 
\"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.815685 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-etc-nvme\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.815712 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-dev\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.815728 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-run\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.815783 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/161c8104-b092-42f2-8e76-513b0e7991d6-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.821379 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-config-data\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.822006 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.822342 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/161c8104-b092-42f2-8e76-513b0e7991d6-ceph\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.824911 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-scripts\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.830936 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/161c8104-b092-42f2-8e76-513b0e7991d6-config-data-custom\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.838663 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkxkj\" (UniqueName: \"kubernetes.io/projected/161c8104-b092-42f2-8e76-513b0e7991d6-kube-api-access-vkxkj\") pod \"cinder-backup-0\" (UID: \"161c8104-b092-42f2-8e76-513b0e7991d6\") " pod="openstack/cinder-backup-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.843922 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:36 crc kubenswrapper[4925]: I0202 11:47:36.928015 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.066560 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-jvc5j"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.067891 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jvc5j" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.122418 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8c3f23-4314-490d-9fa5-0754abe083a1-operator-scripts\") pod \"manila-db-create-jvc5j\" (UID: \"ce8c3f23-4314-490d-9fa5-0754abe083a1\") " pod="openstack/manila-db-create-jvc5j" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.122624 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9ld\" (UniqueName: \"kubernetes.io/projected/ce8c3f23-4314-490d-9fa5-0754abe083a1-kube-api-access-8t9ld\") pod \"manila-db-create-jvc5j\" (UID: \"ce8c3f23-4314-490d-9fa5-0754abe083a1\") " pod="openstack/manila-db-create-jvc5j" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.127884 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-jvc5j"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.195769 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-7be6-account-create-update-zxh47"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.197284 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.199585 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.215144 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-7be6-account-create-update-zxh47"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.224539 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cd59fb695-b7252"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.225357 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9ld\" (UniqueName: \"kubernetes.io/projected/ce8c3f23-4314-490d-9fa5-0754abe083a1-kube-api-access-8t9ld\") pod \"manila-db-create-jvc5j\" (UID: \"ce8c3f23-4314-490d-9fa5-0754abe083a1\") " pod="openstack/manila-db-create-jvc5j" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.225459 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgzrq\" (UniqueName: \"kubernetes.io/projected/5ed10fe1-fca8-4488-97a4-15e007cba9a0-kube-api-access-bgzrq\") pod \"manila-7be6-account-create-update-zxh47\" (UID: \"5ed10fe1-fca8-4488-97a4-15e007cba9a0\") " pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.225880 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8c3f23-4314-490d-9fa5-0754abe083a1-operator-scripts\") pod \"manila-db-create-jvc5j\" (UID: \"ce8c3f23-4314-490d-9fa5-0754abe083a1\") " pod="openstack/manila-db-create-jvc5j" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.225912 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5ed10fe1-fca8-4488-97a4-15e007cba9a0-operator-scripts\") pod \"manila-7be6-account-create-update-zxh47\" (UID: \"5ed10fe1-fca8-4488-97a4-15e007cba9a0\") " pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.226623 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.226852 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8c3f23-4314-490d-9fa5-0754abe083a1-operator-scripts\") pod \"manila-db-create-jvc5j\" (UID: \"ce8c3f23-4314-490d-9fa5-0754abe083a1\") " pod="openstack/manila-db-create-jvc5j" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.230579 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-65szk" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.230745 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.230864 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.231114 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.251541 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cd59fb695-b7252"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.253532 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t9ld\" (UniqueName: \"kubernetes.io/projected/ce8c3f23-4314-490d-9fa5-0754abe083a1-kube-api-access-8t9ld\") pod \"manila-db-create-jvc5j\" (UID: \"ce8c3f23-4314-490d-9fa5-0754abe083a1\") " pod="openstack/manila-db-create-jvc5j" Feb 02 
11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.317649 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5977b94d89-jwxrq"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.331840 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.334573 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2165d1e8-2b5c-4b4d-b55f-d2280523c022-horizon-secret-key\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.334741 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-scripts\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.334822 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-config-data\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.334911 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgzrq\" (UniqueName: \"kubernetes.io/projected/5ed10fe1-fca8-4488-97a4-15e007cba9a0-kube-api-access-bgzrq\") pod \"manila-7be6-account-create-update-zxh47\" (UID: \"5ed10fe1-fca8-4488-97a4-15e007cba9a0\") " pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 
11:47:37.335042 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmkf2\" (UniqueName: \"kubernetes.io/projected/2165d1e8-2b5c-4b4d-b55f-d2280523c022-kube-api-access-cmkf2\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.335135 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2165d1e8-2b5c-4b4d-b55f-d2280523c022-logs\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.335204 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed10fe1-fca8-4488-97a4-15e007cba9a0-operator-scripts\") pod \"manila-7be6-account-create-update-zxh47\" (UID: \"5ed10fe1-fca8-4488-97a4-15e007cba9a0\") " pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.335834 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed10fe1-fca8-4488-97a4-15e007cba9a0-operator-scripts\") pod \"manila-7be6-account-create-update-zxh47\" (UID: \"5ed10fe1-fca8-4488-97a4-15e007cba9a0\") " pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.362857 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5977b94d89-jwxrq"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.379626 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.381017 4925 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.387859 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgzrq\" (UniqueName: \"kubernetes.io/projected/5ed10fe1-fca8-4488-97a4-15e007cba9a0-kube-api-access-bgzrq\") pod \"manila-7be6-account-create-update-zxh47\" (UID: \"5ed10fe1-fca8-4488-97a4-15e007cba9a0\") " pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.388612 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.388889 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.389110 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s2xj8" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.389316 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.393018 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.410147 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-jvc5j" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.439676 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2165d1e8-2b5c-4b4d-b55f-d2280523c022-horizon-secret-key\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.440239 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-scripts\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.440268 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-config-data\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.440423 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmkf2\" (UniqueName: \"kubernetes.io/projected/2165d1e8-2b5c-4b4d-b55f-d2280523c022-kube-api-access-cmkf2\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.440500 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2165d1e8-2b5c-4b4d-b55f-d2280523c022-logs\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 
11:47:37.441106 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2165d1e8-2b5c-4b4d-b55f-d2280523c022-logs\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.442597 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-config-data\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.444680 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-scripts\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.446802 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2165d1e8-2b5c-4b4d-b55f-d2280523c022-horizon-secret-key\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.470060 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.484663 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmkf2\" (UniqueName: \"kubernetes.io/projected/2165d1e8-2b5c-4b4d-b55f-d2280523c022-kube-api-access-cmkf2\") pod \"horizon-6cd59fb695-b7252\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: 
I0202 11:47:37.485468 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.488524 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.488745 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.490866 4925 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/glance-default-external-api-0" oldPodUID="320bb80d-d20b-497e-8188-ed2f7aff22ed" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.508275 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:47:37 crc kubenswrapper[4925]: E0202 11:47:37.513465 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-5t4fg logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-5t4fg logs public-tls-certs scripts]: context canceled" pod="openstack/glance-default-external-api-0" podUID="320bb80d-d20b-497e-8188-ed2f7aff22ed" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.521401 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.543942 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b6bd22b-c4b9-407f-993c-4132ca172b06-horizon-secret-key\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544063 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544152 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-scripts\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544203 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-logs\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544236 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-ceph\") pod \"glance-default-internal-api-0\" (UID: 
\"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544277 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544337 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544355 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47rl9\" (UniqueName: \"kubernetes.io/projected/6b6bd22b-c4b9-407f-993c-4132ca172b06-kube-api-access-47rl9\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544402 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544420 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b6bd22b-c4b9-407f-993c-4132ca172b06-logs\") pod 
\"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544444 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d89k\" (UniqueName: \"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-kube-api-access-6d89k\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544466 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544481 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-config-data\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544507 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.544857 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.550241 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.561536 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.564762 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.576137 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.620589 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646265 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-scripts\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646337 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-logs\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646378 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-ceph\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646445 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646537 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646583 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47rl9\" (UniqueName: \"kubernetes.io/projected/6b6bd22b-c4b9-407f-993c-4132ca172b06-kube-api-access-47rl9\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646608 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646625 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b6bd22b-c4b9-407f-993c-4132ca172b06-logs\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646689 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d89k\" (UniqueName: 
\"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-kube-api-access-6d89k\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646747 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646768 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-config-data\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646826 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646908 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b6bd22b-c4b9-407f-993c-4132ca172b06-horizon-secret-key\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.646990 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.648852 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b6bd22b-c4b9-407f-993c-4132ca172b06-logs\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.649483 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.651358 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-scripts\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.651709 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-config-data\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.651760 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-logs\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc 
kubenswrapper[4925]: I0202 11:47:37.651990 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.656642 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-ceph\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.657429 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.658188 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b6bd22b-c4b9-407f-993c-4132ca172b06-horizon-secret-key\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.658680 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.664984 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.665899 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.684156 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d89k\" (UniqueName: \"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-kube-api-access-6d89k\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.688759 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47rl9\" (UniqueName: \"kubernetes.io/projected/6b6bd22b-c4b9-407f-993c-4132ca172b06-kube-api-access-47rl9\") pod \"horizon-5977b94d89-jwxrq\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.689756 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.690437 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.739064 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.750333 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmrd\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-kube-api-access-bhmrd\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.750400 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-logs\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.750514 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.750566 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-ceph\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.750604 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-config-data\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.750658 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.750685 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.750836 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-scripts\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.751093 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.830035 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-backup-0"] Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.853466 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-config-data\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.853619 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.853644 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.853670 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-scripts\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.853799 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc 
kubenswrapper[4925]: I0202 11:47:37.853878 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhmrd\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-kube-api-access-bhmrd\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.853902 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-logs\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.853982 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.854044 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-ceph\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.857469 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-logs\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.858377 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.858725 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.860490 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.861467 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-config-data\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.869806 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-ceph\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.871624 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.882461 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.898022 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhmrd\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-kube-api-access-bhmrd\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:37 crc kubenswrapper[4925]: I0202 11:47:37.958237 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.004718 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-jvc5j"] Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.194231 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.277966 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-7be6-account-create-update-zxh47"] Feb 02 11:47:38 crc kubenswrapper[4925]: W0202 11:47:38.295438 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ed10fe1_fca8_4488_97a4_15e007cba9a0.slice/crio-f10335b3a8b6db2213b9dc73d28188a0b8e7ea99f5ad0a197c85f162c6e28aa7 WatchSource:0}: Error finding container f10335b3a8b6db2213b9dc73d28188a0b8e7ea99f5ad0a197c85f162c6e28aa7: Status 404 returned error can't find the container with id f10335b3a8b6db2213b9dc73d28188a0b8e7ea99f5ad0a197c85f162c6e28aa7 Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.375658 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5977b94d89-jwxrq"] Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.410998 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cd59fb695-b7252"] Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.462874 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jvc5j" event={"ID":"ce8c3f23-4314-490d-9fa5-0754abe083a1","Type":"ContainerStarted","Data":"c61791d06afe45bcdd3c165e44cf44f79edb7f503237a8048f374dd783b2e562"} Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.462936 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jvc5j" event={"ID":"ce8c3f23-4314-490d-9fa5-0754abe083a1","Type":"ContainerStarted","Data":"d7c0669e2732fa00a29cd49229137081d56c167346891956d1545253340ca92a"} Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.479234 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" 
event={"ID":"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b","Type":"ContainerStarted","Data":"d7b34b340192a1fd3729ae9d8b8e6e2e430938daed61af7e4fbea0899a4d3a68"} Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.499533 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-7be6-account-create-update-zxh47" event={"ID":"5ed10fe1-fca8-4488-97a4-15e007cba9a0","Type":"ContainerStarted","Data":"f10335b3a8b6db2213b9dc73d28188a0b8e7ea99f5ad0a197c85f162c6e28aa7"} Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.502461 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.503067 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"161c8104-b092-42f2-8e76-513b0e7991d6","Type":"ContainerStarted","Data":"245e284d3b58b3b0565a83978eba279407c2745746b36138bf9baa64c5caead4"} Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.504927 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-jvc5j" podStartSLOduration=1.504902565 podStartE2EDuration="1.504902565s" podCreationTimestamp="2026-02-02 11:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:47:38.484688977 +0000 UTC m=+3035.488937939" watchObservedRunningTime="2026-02-02 11:47:38.504902565 +0000 UTC m=+3035.509151547" Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.508023 4925 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/glance-default-external-api-0" oldPodUID="320bb80d-d20b-497e-8188-ed2f7aff22ed" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.554730 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 
11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.589165 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.597320 4925 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/glance-default-external-api-0" oldPodUID="320bb80d-d20b-497e-8188-ed2f7aff22ed" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.681611 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320bb80d-d20b-497e-8188-ed2f7aff22ed" path="/var/lib/kubelet/pods/320bb80d-d20b-497e-8188-ed2f7aff22ed/volumes" Feb 02 11:47:38 crc kubenswrapper[4925]: I0202 11:47:38.882842 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:47:38 crc kubenswrapper[4925]: W0202 11:47:38.913066 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bccddf7_0f96_4606_8a03_1cc6a2b15f91.slice/crio-8383721b56216ab4dd9e8b47ac57eae514f1d36ea80dde30f5f458349ec99bd7 WatchSource:0}: Error finding container 8383721b56216ab4dd9e8b47ac57eae514f1d36ea80dde30f5f458349ec99bd7: Status 404 returned error can't find the container with id 8383721b56216ab4dd9e8b47ac57eae514f1d36ea80dde30f5f458349ec99bd7 Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.530650 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bccddf7-0f96-4606-8a03-1cc6a2b15f91","Type":"ContainerStarted","Data":"8383721b56216ab4dd9e8b47ac57eae514f1d36ea80dde30f5f458349ec99bd7"} Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.537638 4925 generic.go:334] "Generic (PLEG): container finished" podID="ce8c3f23-4314-490d-9fa5-0754abe083a1" containerID="c61791d06afe45bcdd3c165e44cf44f79edb7f503237a8048f374dd783b2e562" exitCode=0 
Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.537755 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jvc5j" event={"ID":"ce8c3f23-4314-490d-9fa5-0754abe083a1","Type":"ContainerDied","Data":"c61791d06afe45bcdd3c165e44cf44f79edb7f503237a8048f374dd783b2e562"} Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.547261 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b","Type":"ContainerStarted","Data":"24b84033ab365467c7c795954c6182a75c60852148f0af8104576e251149fb6a"} Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.547310 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"0e7f2d02-9fa4-4e06-a6ae-77c1390e574b","Type":"ContainerStarted","Data":"6f090ce9f3b384f95018cbdc72a841d0b7b23a35774894cf67a5e13c1a149eac"} Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.549440 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08fcbea0-ddb9-4268-a9fc-a863e59f8c22","Type":"ContainerStarted","Data":"14687c2df1972241cb05ae11e655bb018dbb353754b0e330edf16178e13296ff"} Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.551928 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5977b94d89-jwxrq" event={"ID":"6b6bd22b-c4b9-407f-993c-4132ca172b06","Type":"ContainerStarted","Data":"d521b4b484e2a34e1967e9a9132d844712c6f499f49f96ccccb9d17d24c81a10"} Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.567207 4925 generic.go:334] "Generic (PLEG): container finished" podID="5ed10fe1-fca8-4488-97a4-15e007cba9a0" containerID="d433a9ed6e8e4ffc96912b0b19cd6d0d7ea944859130388cddd1214bd2c8154c" exitCode=0 Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.567306 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-7be6-account-create-update-zxh47" 
event={"ID":"5ed10fe1-fca8-4488-97a4-15e007cba9a0","Type":"ContainerDied","Data":"d433a9ed6e8e4ffc96912b0b19cd6d0d7ea944859130388cddd1214bd2c8154c"} Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.591867 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"161c8104-b092-42f2-8e76-513b0e7991d6","Type":"ContainerStarted","Data":"47d92df0d9f51a6c5ac83179fa698a1e2cf6d37514d62ece474ef2d41e394f48"} Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.593166 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.593249 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cd59fb695-b7252" event={"ID":"2165d1e8-2b5c-4b4d-b55f-d2280523c022","Type":"ContainerStarted","Data":"be2081f3e44356e125ab0ee953d112feaf1bb87a41a064623267eee2eb684dd7"} Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.608846 4925 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/glance-default-external-api-0" oldPodUID="320bb80d-d20b-497e-8188-ed2f7aff22ed" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.613152 4925 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/glance-default-external-api-0" oldPodUID="320bb80d-d20b-497e-8188-ed2f7aff22ed" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" Feb 02 11:47:39 crc kubenswrapper[4925]: I0202 11:47:39.613739 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.720870178 podStartE2EDuration="3.613715215s" podCreationTimestamp="2026-02-02 11:47:36 +0000 UTC" firstStartedPulling="2026-02-02 11:47:37.632803081 +0000 UTC m=+3034.637052043" lastFinishedPulling="2026-02-02 11:47:38.525648118 +0000 UTC m=+3035.529897080" 
observedRunningTime="2026-02-02 11:47:39.580515254 +0000 UTC m=+3036.584764236" watchObservedRunningTime="2026-02-02 11:47:39.613715215 +0000 UTC m=+3036.617964187" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.020250 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5977b94d89-jwxrq"] Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.130314 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-96c5cb844-xrpsd"] Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.132224 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.149288 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.181237 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.208639 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-96c5cb844-xrpsd"] Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.279153 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cd59fb695-b7252"] Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.326548 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-combined-ca-bundle\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.326930 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4qhp\" (UniqueName: \"kubernetes.io/projected/1315a531-ca20-494e-9273-dfa832b62744-kube-api-access-v4qhp\") 
pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.327175 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-secret-key\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.327366 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-config-data\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.327480 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1315a531-ca20-494e-9273-dfa832b62744-logs\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.327617 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-tls-certs\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.327727 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-scripts\") pod \"horizon-96c5cb844-xrpsd\" (UID: 
\"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.337538 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.378002 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c6d58558b-gh6c8"] Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.379940 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.390707 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c6d58558b-gh6c8"] Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429055 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-config-data\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429153 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1315a531-ca20-494e-9273-dfa832b62744-logs\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429518 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc24fd1-e3f5-4235-9190-c9aad51e4282-logs\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429571 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc24fd1-e3f5-4235-9190-c9aad51e4282-combined-ca-bundle\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429611 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-tls-certs\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429660 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-scripts\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429718 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acc24fd1-e3f5-4235-9190-c9aad51e4282-config-data\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429768 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-combined-ca-bundle\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429821 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/acc24fd1-e3f5-4235-9190-c9aad51e4282-horizon-secret-key\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429875 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4qhp\" (UniqueName: \"kubernetes.io/projected/1315a531-ca20-494e-9273-dfa832b62744-kube-api-access-v4qhp\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.429949 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48gsb\" (UniqueName: \"kubernetes.io/projected/acc24fd1-e3f5-4235-9190-c9aad51e4282-kube-api-access-48gsb\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.430031 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/acc24fd1-e3f5-4235-9190-c9aad51e4282-horizon-tls-certs\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.430109 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-secret-key\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.430131 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/acc24fd1-e3f5-4235-9190-c9aad51e4282-scripts\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.438008 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-config-data\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.439384 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1315a531-ca20-494e-9273-dfa832b62744-logs\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.440842 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-scripts\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.460423 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-combined-ca-bundle\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.464560 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-tls-certs\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " 
pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.483615 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-secret-key\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.503790 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4qhp\" (UniqueName: \"kubernetes.io/projected/1315a531-ca20-494e-9273-dfa832b62744-kube-api-access-v4qhp\") pod \"horizon-96c5cb844-xrpsd\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.534184 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc24fd1-e3f5-4235-9190-c9aad51e4282-logs\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.534237 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc24fd1-e3f5-4235-9190-c9aad51e4282-combined-ca-bundle\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.534295 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acc24fd1-e3f5-4235-9190-c9aad51e4282-config-data\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.534342 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/acc24fd1-e3f5-4235-9190-c9aad51e4282-horizon-secret-key\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.534384 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48gsb\" (UniqueName: \"kubernetes.io/projected/acc24fd1-e3f5-4235-9190-c9aad51e4282-kube-api-access-48gsb\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.534423 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/acc24fd1-e3f5-4235-9190-c9aad51e4282-horizon-tls-certs\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.534458 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acc24fd1-e3f5-4235-9190-c9aad51e4282-scripts\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.535455 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acc24fd1-e3f5-4235-9190-c9aad51e4282-scripts\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.536547 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/acc24fd1-e3f5-4235-9190-c9aad51e4282-config-data\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.536905 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc24fd1-e3f5-4235-9190-c9aad51e4282-logs\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.542366 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/acc24fd1-e3f5-4235-9190-c9aad51e4282-horizon-secret-key\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.550313 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/acc24fd1-e3f5-4235-9190-c9aad51e4282-horizon-tls-certs\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.557692 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48gsb\" (UniqueName: \"kubernetes.io/projected/acc24fd1-e3f5-4235-9190-c9aad51e4282-kube-api-access-48gsb\") pod \"horizon-c6d58558b-gh6c8\" (UID: \"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.567514 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc24fd1-e3f5-4235-9190-c9aad51e4282-combined-ca-bundle\") pod \"horizon-c6d58558b-gh6c8\" (UID: 
\"acc24fd1-e3f5-4235-9190-c9aad51e4282\") " pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.598514 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.626086 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bccddf7-0f96-4606-8a03-1cc6a2b15f91","Type":"ContainerStarted","Data":"2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780"} Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.633424 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08fcbea0-ddb9-4268-a9fc-a863e59f8c22","Type":"ContainerStarted","Data":"b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413"} Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.638752 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"161c8104-b092-42f2-8e76-513b0e7991d6","Type":"ContainerStarted","Data":"45c12f71134f497d5fae652265e353f1ca71a75f79c29909f649c23d6b5ef56c"} Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.686904 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.628585568 podStartE2EDuration="4.686878217s" podCreationTimestamp="2026-02-02 11:47:36 +0000 UTC" firstStartedPulling="2026-02-02 11:47:37.859097884 +0000 UTC m=+3034.863346846" lastFinishedPulling="2026-02-02 11:47:38.917390533 +0000 UTC m=+3035.921639495" observedRunningTime="2026-02-02 11:47:40.683114854 +0000 UTC m=+3037.687363836" watchObservedRunningTime="2026-02-02 11:47:40.686878217 +0000 UTC m=+3037.691127189" Feb 02 11:47:40 crc kubenswrapper[4925]: I0202 11:47:40.803449 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.270335 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.287362 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-jvc5j" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.458150 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c6d58558b-gh6c8"] Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.461477 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t9ld\" (UniqueName: \"kubernetes.io/projected/ce8c3f23-4314-490d-9fa5-0754abe083a1-kube-api-access-8t9ld\") pod \"ce8c3f23-4314-490d-9fa5-0754abe083a1\" (UID: \"ce8c3f23-4314-490d-9fa5-0754abe083a1\") " Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.461564 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed10fe1-fca8-4488-97a4-15e007cba9a0-operator-scripts\") pod \"5ed10fe1-fca8-4488-97a4-15e007cba9a0\" (UID: \"5ed10fe1-fca8-4488-97a4-15e007cba9a0\") " Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.461713 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8c3f23-4314-490d-9fa5-0754abe083a1-operator-scripts\") pod \"ce8c3f23-4314-490d-9fa5-0754abe083a1\" (UID: \"ce8c3f23-4314-490d-9fa5-0754abe083a1\") " Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.461817 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgzrq\" (UniqueName: \"kubernetes.io/projected/5ed10fe1-fca8-4488-97a4-15e007cba9a0-kube-api-access-bgzrq\") pod 
\"5ed10fe1-fca8-4488-97a4-15e007cba9a0\" (UID: \"5ed10fe1-fca8-4488-97a4-15e007cba9a0\") " Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.462753 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed10fe1-fca8-4488-97a4-15e007cba9a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ed10fe1-fca8-4488-97a4-15e007cba9a0" (UID: "5ed10fe1-fca8-4488-97a4-15e007cba9a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.463280 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce8c3f23-4314-490d-9fa5-0754abe083a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce8c3f23-4314-490d-9fa5-0754abe083a1" (UID: "ce8c3f23-4314-490d-9fa5-0754abe083a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.468098 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed10fe1-fca8-4488-97a4-15e007cba9a0-kube-api-access-bgzrq" (OuterVolumeSpecName: "kube-api-access-bgzrq") pod "5ed10fe1-fca8-4488-97a4-15e007cba9a0" (UID: "5ed10fe1-fca8-4488-97a4-15e007cba9a0"). InnerVolumeSpecName "kube-api-access-bgzrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.468936 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8c3f23-4314-490d-9fa5-0754abe083a1-kube-api-access-8t9ld" (OuterVolumeSpecName: "kube-api-access-8t9ld") pod "ce8c3f23-4314-490d-9fa5-0754abe083a1" (UID: "ce8c3f23-4314-490d-9fa5-0754abe083a1"). InnerVolumeSpecName "kube-api-access-8t9ld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.564240 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgzrq\" (UniqueName: \"kubernetes.io/projected/5ed10fe1-fca8-4488-97a4-15e007cba9a0-kube-api-access-bgzrq\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.564600 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t9ld\" (UniqueName: \"kubernetes.io/projected/ce8c3f23-4314-490d-9fa5-0754abe083a1-kube-api-access-8t9ld\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.564616 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ed10fe1-fca8-4488-97a4-15e007cba9a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.564627 4925 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce8c3f23-4314-490d-9fa5-0754abe083a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.694554 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-96c5cb844-xrpsd"] Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.696883 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-7be6-account-create-update-zxh47" event={"ID":"5ed10fe1-fca8-4488-97a4-15e007cba9a0","Type":"ContainerDied","Data":"f10335b3a8b6db2213b9dc73d28188a0b8e7ea99f5ad0a197c85f162c6e28aa7"} Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.696980 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f10335b3a8b6db2213b9dc73d28188a0b8e7ea99f5ad0a197c85f162c6e28aa7" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.696900 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-7be6-account-create-update-zxh47" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.708689 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bccddf7-0f96-4606-8a03-1cc6a2b15f91","Type":"ContainerStarted","Data":"72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85"} Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.708837 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerName="glance-log" containerID="cri-o://2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780" gracePeriod=30 Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.709204 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerName="glance-httpd" containerID="cri-o://72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85" gracePeriod=30 Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.712008 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6d58558b-gh6c8" event={"ID":"acc24fd1-e3f5-4235-9190-c9aad51e4282","Type":"ContainerStarted","Data":"6125a551817543b21ae7b4dc5db467f44de3e151f6d9626e45efc18843898947"} Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.728145 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-jvc5j" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.728156 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-jvc5j" event={"ID":"ce8c3f23-4314-490d-9fa5-0754abe083a1","Type":"ContainerDied","Data":"d7c0669e2732fa00a29cd49229137081d56c167346891956d1545253340ca92a"} Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.728213 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c0669e2732fa00a29cd49229137081d56c167346891956d1545253340ca92a" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.742243 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.7422180449999995 podStartE2EDuration="4.742218045s" podCreationTimestamp="2026-02-02 11:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:47:41.734862535 +0000 UTC m=+3038.739111497" watchObservedRunningTime="2026-02-02 11:47:41.742218045 +0000 UTC m=+3038.746467007" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.766291 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08fcbea0-ddb9-4268-a9fc-a863e59f8c22","Type":"ContainerStarted","Data":"c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3"} Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.766347 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerName="glance-log" containerID="cri-o://b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413" gracePeriod=30 Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.766774 4925 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerName="glance-httpd" containerID="cri-o://c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3" gracePeriod=30 Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.812486 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.812469282 podStartE2EDuration="4.812469282s" podCreationTimestamp="2026-02-02 11:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:47:41.794358491 +0000 UTC m=+3038.798607473" watchObservedRunningTime="2026-02-02 11:47:41.812469282 +0000 UTC m=+3038.816718244" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.844522 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:41 crc kubenswrapper[4925]: I0202 11:47:41.933225 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 02 11:47:42 crc kubenswrapper[4925]: E0202 11:47:42.247973 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bccddf7_0f96_4606_8a03_1cc6a2b15f91.slice/crio-conmon-72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.461161 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.590822 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-ceph\") pod \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.590853 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-scripts\") pod \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.590879 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d89k\" (UniqueName: \"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-kube-api-access-6d89k\") pod \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.590906 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-httpd-run\") pod \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.590941 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-config-data\") pod \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.591036 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.591052 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-combined-ca-bundle\") pod \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.591192 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-internal-tls-certs\") pod \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.591228 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-logs\") pod \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\" (UID: \"08fcbea0-ddb9-4268-a9fc-a863e59f8c22\") " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.592868 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-logs" (OuterVolumeSpecName: "logs") pod "08fcbea0-ddb9-4268-a9fc-a863e59f8c22" (UID: "08fcbea0-ddb9-4268-a9fc-a863e59f8c22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.593214 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "08fcbea0-ddb9-4268-a9fc-a863e59f8c22" (UID: "08fcbea0-ddb9-4268-a9fc-a863e59f8c22"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.598805 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "08fcbea0-ddb9-4268-a9fc-a863e59f8c22" (UID: "08fcbea0-ddb9-4268-a9fc-a863e59f8c22"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.600017 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-ceph" (OuterVolumeSpecName: "ceph") pod "08fcbea0-ddb9-4268-a9fc-a863e59f8c22" (UID: "08fcbea0-ddb9-4268-a9fc-a863e59f8c22"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.601686 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-kube-api-access-6d89k" (OuterVolumeSpecName: "kube-api-access-6d89k") pod "08fcbea0-ddb9-4268-a9fc-a863e59f8c22" (UID: "08fcbea0-ddb9-4268-a9fc-a863e59f8c22"). InnerVolumeSpecName "kube-api-access-6d89k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.603060 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-scripts" (OuterVolumeSpecName: "scripts") pod "08fcbea0-ddb9-4268-a9fc-a863e59f8c22" (UID: "08fcbea0-ddb9-4268-a9fc-a863e59f8c22"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.633164 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08fcbea0-ddb9-4268-a9fc-a863e59f8c22" (UID: "08fcbea0-ddb9-4268-a9fc-a863e59f8c22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.669207 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "08fcbea0-ddb9-4268-a9fc-a863e59f8c22" (UID: "08fcbea0-ddb9-4268-a9fc-a863e59f8c22"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.673922 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-config-data" (OuterVolumeSpecName: "config-data") pod "08fcbea0-ddb9-4268-a9fc-a863e59f8c22" (UID: "08fcbea0-ddb9-4268-a9fc-a863e59f8c22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.693479 4925 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.693516 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.693542 4925 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.693556 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.693567 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.693580 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.693591 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d89k\" (UniqueName: \"kubernetes.io/projected/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-kube-api-access-6d89k\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.693602 4925 reconciler_common.go:293] "Volume detached for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.693612 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fcbea0-ddb9-4268-a9fc-a863e59f8c22-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.694690 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.748321 4925 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.792111 4925 generic.go:334] "Generic (PLEG): container finished" podID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerID="72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85" exitCode=0 Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.792146 4925 generic.go:334] "Generic (PLEG): container finished" podID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerID="2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780" exitCode=143 Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.792188 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bccddf7-0f96-4606-8a03-1cc6a2b15f91","Type":"ContainerDied","Data":"72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85"} Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.792222 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bccddf7-0f96-4606-8a03-1cc6a2b15f91","Type":"ContainerDied","Data":"2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780"} Feb 02 11:47:42 crc 
kubenswrapper[4925]: I0202 11:47:42.792236 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bccddf7-0f96-4606-8a03-1cc6a2b15f91","Type":"ContainerDied","Data":"8383721b56216ab4dd9e8b47ac57eae514f1d36ea80dde30f5f458349ec99bd7"}
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.792253 4925 scope.go:117] "RemoveContainer" containerID="72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85"
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.792383 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.794307 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-ceph\") pod \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.794470 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-httpd-run\") pod \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.794519 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.794567 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-logs\") pod \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.794603 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-combined-ca-bundle\") pod \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.794625 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhmrd\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-kube-api-access-bhmrd\") pod \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.794648 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-scripts\") pod \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.794676 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-config-data\") pod \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.794714 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-public-tls-certs\") pod \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\" (UID: \"4bccddf7-0f96-4606-8a03-1cc6a2b15f91\") "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.795391 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-logs" (OuterVolumeSpecName: "logs") pod "4bccddf7-0f96-4606-8a03-1cc6a2b15f91" (UID: "4bccddf7-0f96-4606-8a03-1cc6a2b15f91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.795550 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4bccddf7-0f96-4606-8a03-1cc6a2b15f91" (UID: "4bccddf7-0f96-4606-8a03-1cc6a2b15f91"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.795906 4925 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.795926 4925 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.795936 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-logs\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.800383 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96c5cb844-xrpsd" event={"ID":"1315a531-ca20-494e-9273-dfa832b62744","Type":"ContainerStarted","Data":"a69f2658f791155a16b645b4f245ba9433b0ee4982bf9ab18d50f86e7e61c74f"}
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.801162 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "4bccddf7-0f96-4606-8a03-1cc6a2b15f91" (UID: "4bccddf7-0f96-4606-8a03-1cc6a2b15f91"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.803167 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-kube-api-access-bhmrd" (OuterVolumeSpecName: "kube-api-access-bhmrd") pod "4bccddf7-0f96-4606-8a03-1cc6a2b15f91" (UID: "4bccddf7-0f96-4606-8a03-1cc6a2b15f91"). InnerVolumeSpecName "kube-api-access-bhmrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.803673 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-scripts" (OuterVolumeSpecName: "scripts") pod "4bccddf7-0f96-4606-8a03-1cc6a2b15f91" (UID: "4bccddf7-0f96-4606-8a03-1cc6a2b15f91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.805479 4925 generic.go:334] "Generic (PLEG): container finished" podID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerID="c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3" exitCode=0
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.805512 4925 generic.go:334] "Generic (PLEG): container finished" podID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerID="b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413" exitCode=143
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.806518 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.807974 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08fcbea0-ddb9-4268-a9fc-a863e59f8c22","Type":"ContainerDied","Data":"c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3"}
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.808009 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08fcbea0-ddb9-4268-a9fc-a863e59f8c22","Type":"ContainerDied","Data":"b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413"}
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.808021 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08fcbea0-ddb9-4268-a9fc-a863e59f8c22","Type":"ContainerDied","Data":"14687c2df1972241cb05ae11e655bb018dbb353754b0e330edf16178e13296ff"}
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.808100 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-ceph" (OuterVolumeSpecName: "ceph") pod "4bccddf7-0f96-4606-8a03-1cc6a2b15f91" (UID: "4bccddf7-0f96-4606-8a03-1cc6a2b15f91"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.832910 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bccddf7-0f96-4606-8a03-1cc6a2b15f91" (UID: "4bccddf7-0f96-4606-8a03-1cc6a2b15f91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.861401 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-config-data" (OuterVolumeSpecName: "config-data") pod "4bccddf7-0f96-4606-8a03-1cc6a2b15f91" (UID: "4bccddf7-0f96-4606-8a03-1cc6a2b15f91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.873458 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4bccddf7-0f96-4606-8a03-1cc6a2b15f91" (UID: "4bccddf7-0f96-4606-8a03-1cc6a2b15f91"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.903952 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-ceph\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.904024 4925 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.904047 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.904064 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhmrd\" (UniqueName: \"kubernetes.io/projected/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-kube-api-access-bhmrd\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.904148 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.904164 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.904180 4925 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccddf7-0f96-4606-8a03-1cc6a2b15f91-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.942859 4925 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Feb 02 11:47:42 crc kubenswrapper[4925]: I0202 11:47:42.992327 4925 scope.go:117] "RemoveContainer" containerID="2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.005563 4925 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.019865 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.040772 4925 scope.go:117] "RemoveContainer" containerID="72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85"
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.044756 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85\": container with ID starting with 72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85 not found: ID does not exist" containerID="72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.044808 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85"} err="failed to get container status \"72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85\": rpc error: code = NotFound desc = could not find container \"72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85\": container with ID starting with 72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85 not found: ID does not exist"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.044836 4925 scope.go:117] "RemoveContainer" containerID="2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780"
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.045184 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780\": container with ID starting with 2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780 not found: ID does not exist" containerID="2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.045211 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780"} err="failed to get container status \"2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780\": rpc error: code = NotFound desc = could not find container \"2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780\": container with ID starting with 2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780 not found: ID does not exist"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.045229 4925 scope.go:117] "RemoveContainer" containerID="72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.045410 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85"} err="failed to get container status \"72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85\": rpc error: code = NotFound desc = could not find container \"72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85\": container with ID starting with 72f306c4a01749ef52b4db1de51b93c51e0f7b80dc634937ab00450c0c40ef85 not found: ID does not exist"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.045432 4925 scope.go:117] "RemoveContainer" containerID="2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.045591 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780"} err="failed to get container status \"2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780\": rpc error: code = NotFound desc = could not find container \"2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780\": container with ID starting with 2d7fbef6fccfab272d5bab54b97adf2f920af8255d54494f42b1b96ddde77780 not found: ID does not exist"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.045616 4925 scope.go:117] "RemoveContainer" containerID="c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.048697 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.062258 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.062738 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerName="glance-log"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.062760 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerName="glance-log"
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.062777 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed10fe1-fca8-4488-97a4-15e007cba9a0" containerName="mariadb-account-create-update"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.062785 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed10fe1-fca8-4488-97a4-15e007cba9a0" containerName="mariadb-account-create-update"
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.062797 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerName="glance-httpd"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.062803 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerName="glance-httpd"
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.062819 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerName="glance-log"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.062825 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerName="glance-log"
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.062835 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8c3f23-4314-490d-9fa5-0754abe083a1" containerName="mariadb-database-create"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.062841 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8c3f23-4314-490d-9fa5-0754abe083a1" containerName="mariadb-database-create"
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.062851 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerName="glance-httpd"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.062857 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerName="glance-httpd"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.063006 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerName="glance-httpd"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.063017 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed10fe1-fca8-4488-97a4-15e007cba9a0" containerName="mariadb-account-create-update"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.063029 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" containerName="glance-log"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.063044 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerName="glance-httpd"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.063059 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" containerName="glance-log"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.063066 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8c3f23-4314-490d-9fa5-0754abe083a1" containerName="mariadb-database-create"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.064096 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.076051 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.076988 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.078569 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.099810 4925 scope.go:117] "RemoveContainer" containerID="b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.141626 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.154934 4925 scope.go:117] "RemoveContainer" containerID="c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3"
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.158212 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3\": container with ID starting with c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3 not found: ID does not exist" containerID="c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.158264 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3"} err="failed to get container status \"c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3\": rpc error: code = NotFound desc = could not find container \"c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3\": container with ID starting with c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3 not found: ID does not exist"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.158291 4925 scope.go:117] "RemoveContainer" containerID="b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413"
Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.162209 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413\": container with ID starting with b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413 not found: ID does not exist" containerID="b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.162249 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413"} err="failed to get container status \"b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413\": rpc error: code = NotFound desc = could not find container \"b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413\": container with ID starting with b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413 not found: ID does not exist"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.162276 4925 scope.go:117] "RemoveContainer" containerID="c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.166341 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3"} err="failed to get container status \"c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3\": rpc error: code = NotFound desc = could not find container \"c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3\": container with ID starting with c68cd55cd38d24412803db9c3d1be7b57bd243b64c0283c9396bfcdd67eb62d3 not found: ID does not exist"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.166397 4925 scope.go:117] "RemoveContainer" containerID="b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.167182 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413"} err="failed to get container status \"b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413\": rpc error: code = NotFound desc = could not find container \"b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413\": container with ID starting with b4bb432858536e3c5318179c8dea37614782e000787b5b5ea81dd258aedb6413 not found: ID does not exist"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.178844 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.188116 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.190391 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.195061 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.197654 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.202404 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.218299 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.218702 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99d0cf5b-0a90-49c5-8302-4401070f1c3c-logs\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.218729 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.218893 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.218937 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99d0cf5b-0a90-49c5-8302-4401070f1c3c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.218957 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.219055 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjwsg\" (UniqueName: \"kubernetes.io/projected/99d0cf5b-0a90-49c5-8302-4401070f1c3c-kube-api-access-pjwsg\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.219175 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.219220 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/99d0cf5b-0a90-49c5-8302-4401070f1c3c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.321660 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.321730 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.321768 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.321822 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.321857 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99d0cf5b-0a90-49c5-8302-4401070f1c3c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.321880 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.321952 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.321986 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjwsg\" (UniqueName: \"kubernetes.io/projected/99d0cf5b-0a90-49c5-8302-4401070f1c3c-kube-api-access-pjwsg\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322019 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322067 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-logs\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322133 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322179 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/99d0cf5b-0a90-49c5-8302-4401070f1c3c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322232 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322335 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322370 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322408 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmw2\" (UniqueName: \"kubernetes.io/projected/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-kube-api-access-tjmw2\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322467 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99d0cf5b-0a90-49c5-8302-4401070f1c3c-logs\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.322495 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.323786 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/99d0cf5b-0a90-49c5-8302-4401070f1c3c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.324256 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName:
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.324525 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99d0cf5b-0a90-49c5-8302-4401070f1c3c-logs\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.330319 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/99d0cf5b-0a90-49c5-8302-4401070f1c3c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.332276 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.332992 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.335441 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.342956 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99d0cf5b-0a90-49c5-8302-4401070f1c3c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.349247 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjwsg\" (UniqueName: \"kubernetes.io/projected/99d0cf5b-0a90-49c5-8302-4401070f1c3c-kube-api-access-pjwsg\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.387560 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"99d0cf5b-0a90-49c5-8302-4401070f1c3c\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.400024 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.403435 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.403509 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.403559 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.406905 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.406991 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" gracePeriod=600 Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.424675 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceph\" (UniqueName: \"kubernetes.io/projected/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.424742 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjmw2\" (UniqueName: \"kubernetes.io/projected/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-kube-api-access-tjmw2\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.424838 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.424876 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.424914 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.424977 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.425005 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.425038 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-logs\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.425121 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.425537 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.433746 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.434031 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.437274 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-logs\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.440000 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.443782 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.445393 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.449798 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.461394 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjmw2\" (UniqueName: \"kubernetes.io/projected/29d38bf8-6523-4fe2-9fb9-7385f5ea31bf-kube-api-access-tjmw2\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.475265 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf\") " pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.527625 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.602810 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.870006 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" exitCode=0 Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.870121 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44"} Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.870154 4925 scope.go:117] "RemoveContainer" containerID="40cf3ac2a0ac9b7206f6541b854f2b61cc2451fe97ce5ef5864cb5666fd27668" Feb 02 11:47:43 crc kubenswrapper[4925]: I0202 11:47:43.870718 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:47:43 crc kubenswrapper[4925]: E0202 11:47:43.870942 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:47:44 crc kubenswrapper[4925]: I0202 11:47:44.156631 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:47:44 crc kubenswrapper[4925]: I0202 11:47:44.321363 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:47:44 crc kubenswrapper[4925]: I0202 11:47:44.688934 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08fcbea0-ddb9-4268-a9fc-a863e59f8c22" path="/var/lib/kubelet/pods/08fcbea0-ddb9-4268-a9fc-a863e59f8c22/volumes" Feb 02 11:47:44 crc kubenswrapper[4925]: I0202 11:47:44.691547 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bccddf7-0f96-4606-8a03-1cc6a2b15f91" path="/var/lib/kubelet/pods/4bccddf7-0f96-4606-8a03-1cc6a2b15f91/volumes" Feb 02 11:47:44 crc kubenswrapper[4925]: I0202 11:47:44.883979 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"99d0cf5b-0a90-49c5-8302-4401070f1c3c","Type":"ContainerStarted","Data":"ab8bcedf3ac4101e16ee0a0a8750e153ad52fa0de0cd829047d843e73d398184"} Feb 02 11:47:44 crc kubenswrapper[4925]: I0202 11:47:44.887857 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf","Type":"ContainerStarted","Data":"45704aa83334e95c5a11dfaefe382bf703a65d689009b7796077a27e66f83dbb"} Feb 02 11:47:45 crc kubenswrapper[4925]: I0202 11:47:45.899403 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf","Type":"ContainerStarted","Data":"e26ade7768bdc7d79b58b01b473afabab51ef7437df94cf7de812e3f22a20ad0"} Feb 02 11:47:45 crc kubenswrapper[4925]: I0202 11:47:45.901776 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"99d0cf5b-0a90-49c5-8302-4401070f1c3c","Type":"ContainerStarted","Data":"da3f9dc04cbe4b0480e9566218ca02c42a755357f9081045359cec258b3d5172"} Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.036596 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.147916 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.633009 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-fnp7t"] Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.634359 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.641565 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-g9rnx" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.641837 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.647845 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-fnp7t"] Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.713277 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzr7\" (UniqueName: \"kubernetes.io/projected/2b317877-90ba-4149-aa93-a74664be058d-kube-api-access-zvzr7\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.713391 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-config-data\") pod 
\"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.713514 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-job-config-data\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.713546 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-combined-ca-bundle\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.815328 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-job-config-data\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.815388 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-combined-ca-bundle\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.815448 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzr7\" (UniqueName: \"kubernetes.io/projected/2b317877-90ba-4149-aa93-a74664be058d-kube-api-access-zvzr7\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " 
pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.815490 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-config-data\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.826560 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-job-config-data\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.828009 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-config-data\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.830511 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-combined-ca-bundle\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.848041 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzr7\" (UniqueName: \"kubernetes.io/projected/2b317877-90ba-4149-aa93-a74664be058d-kube-api-access-zvzr7\") pod \"manila-db-sync-fnp7t\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:47 crc kubenswrapper[4925]: I0202 11:47:47.966231 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-fnp7t" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.286257 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w2j24"] Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.288816 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.302617 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2j24"] Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.452217 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-utilities\") pod \"redhat-operators-w2j24\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.452327 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-catalog-content\") pod \"redhat-operators-w2j24\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.452355 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thzm6\" (UniqueName: \"kubernetes.io/projected/aa6b0698-6261-4b10-badd-2ea52206b8dd-kube-api-access-thzm6\") pod \"redhat-operators-w2j24\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.553793 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-catalog-content\") pod \"redhat-operators-w2j24\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.553840 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thzm6\" (UniqueName: \"kubernetes.io/projected/aa6b0698-6261-4b10-badd-2ea52206b8dd-kube-api-access-thzm6\") pod \"redhat-operators-w2j24\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.553975 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-utilities\") pod \"redhat-operators-w2j24\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.554602 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-utilities\") pod \"redhat-operators-w2j24\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.554603 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-catalog-content\") pod \"redhat-operators-w2j24\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.588796 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thzm6\" (UniqueName: 
\"kubernetes.io/projected/aa6b0698-6261-4b10-badd-2ea52206b8dd-kube-api-access-thzm6\") pod \"redhat-operators-w2j24\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:48 crc kubenswrapper[4925]: I0202 11:47:48.615687 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.079359 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2j24"] Feb 02 11:47:51 crc kubenswrapper[4925]: W0202 11:47:51.101849 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa6b0698_6261_4b10_badd_2ea52206b8dd.slice/crio-f67da9398dbf6ed673879d45164c6a046ba827a0813110e644b3808750789129 WatchSource:0}: Error finding container f67da9398dbf6ed673879d45164c6a046ba827a0813110e644b3808750789129: Status 404 returned error can't find the container with id f67da9398dbf6ed673879d45164c6a046ba827a0813110e644b3808750789129 Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.167043 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-fnp7t"] Feb 02 11:47:51 crc kubenswrapper[4925]: W0202 11:47:51.217492 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b317877_90ba_4149_aa93_a74664be058d.slice/crio-8eaf42f568fe7458ea76aaf8f3648d8d5fb40b527e5ce18d557ae42512607108 WatchSource:0}: Error finding container 8eaf42f568fe7458ea76aaf8f3648d8d5fb40b527e5ce18d557ae42512607108: Status 404 returned error can't find the container with id 8eaf42f568fe7458ea76aaf8f3648d8d5fb40b527e5ce18d557ae42512607108 Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.973984 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5977b94d89-jwxrq" 
event={"ID":"6b6bd22b-c4b9-407f-993c-4132ca172b06","Type":"ContainerStarted","Data":"0811e307a0adbe2450e984038467b742e6ee1d31a445ecec67ab18f02e8ee1a7"} Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.974378 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5977b94d89-jwxrq" event={"ID":"6b6bd22b-c4b9-407f-993c-4132ca172b06","Type":"ContainerStarted","Data":"740a8487a4456f778e7daa3cf6d81d35c2d724bbf137bd17dbc4cc68dd1643a0"} Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.974146 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5977b94d89-jwxrq" podUID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerName="horizon" containerID="cri-o://0811e307a0adbe2450e984038467b742e6ee1d31a445ecec67ab18f02e8ee1a7" gracePeriod=30 Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.974117 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5977b94d89-jwxrq" podUID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerName="horizon-log" containerID="cri-o://740a8487a4456f778e7daa3cf6d81d35c2d724bbf137bd17dbc4cc68dd1643a0" gracePeriod=30 Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.978691 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cd59fb695-b7252" event={"ID":"2165d1e8-2b5c-4b4d-b55f-d2280523c022","Type":"ContainerStarted","Data":"5c717323a1ee39e8f9364d15b230ecd1883a4ee0f9eb10bbaefcd7a8e6fbba16"} Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.978722 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cd59fb695-b7252" podUID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerName="horizon-log" containerID="cri-o://a190391bd1cd88e671dc4253974a39581aa4023ad5d6553a83ab852eb3015765" gracePeriod=30 Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.978758 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cd59fb695-b7252" 
event={"ID":"2165d1e8-2b5c-4b4d-b55f-d2280523c022","Type":"ContainerStarted","Data":"a190391bd1cd88e671dc4253974a39581aa4023ad5d6553a83ab852eb3015765"} Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.978819 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cd59fb695-b7252" podUID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerName="horizon" containerID="cri-o://5c717323a1ee39e8f9364d15b230ecd1883a4ee0f9eb10bbaefcd7a8e6fbba16" gracePeriod=30 Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.985491 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6d58558b-gh6c8" event={"ID":"acc24fd1-e3f5-4235-9190-c9aad51e4282","Type":"ContainerStarted","Data":"fbfeb2a0d3a71cec9f5cd071a133b9b7444bcc380e5e9f04a3b6268111543b82"} Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.985558 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c6d58558b-gh6c8" event={"ID":"acc24fd1-e3f5-4235-9190-c9aad51e4282","Type":"ContainerStarted","Data":"dc9bd7680ec888ec2058d398559fe018c4e5fc49da607fe6fd0f2275a73880fe"} Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.989348 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96c5cb844-xrpsd" event={"ID":"1315a531-ca20-494e-9273-dfa832b62744","Type":"ContainerStarted","Data":"2a2caab84eef03021831e24f792ffa3c52397dade541be444489851fc252f99d"} Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.989384 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96c5cb844-xrpsd" event={"ID":"1315a531-ca20-494e-9273-dfa832b62744","Type":"ContainerStarted","Data":"4cf5134ea8bc9226a2250703fc585cd09993a5838777b5b7aaf3a00bf1bdcb24"} Feb 02 11:47:51 crc kubenswrapper[4925]: I0202 11:47:51.994438 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"99d0cf5b-0a90-49c5-8302-4401070f1c3c","Type":"ContainerStarted","Data":"e50309af87129e642fc3e4eb5abb6ca8d958afece3c560b463f16fb7748bd323"} Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:51.999015 4925 generic.go:334] "Generic (PLEG): container finished" podID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerID="d933e03bad5a5a169ee0eee01aeeb017557c47df4a9c5c11c44a6fd63e7ecd18" exitCode=0 Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:51.999139 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2j24" event={"ID":"aa6b0698-6261-4b10-badd-2ea52206b8dd","Type":"ContainerDied","Data":"d933e03bad5a5a169ee0eee01aeeb017557c47df4a9c5c11c44a6fd63e7ecd18"} Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:52.001399 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2j24" event={"ID":"aa6b0698-6261-4b10-badd-2ea52206b8dd","Type":"ContainerStarted","Data":"f67da9398dbf6ed673879d45164c6a046ba827a0813110e644b3808750789129"} Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:52.003427 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5977b94d89-jwxrq" podStartSLOduration=2.75275983 podStartE2EDuration="15.003409345s" podCreationTimestamp="2026-02-02 11:47:37 +0000 UTC" firstStartedPulling="2026-02-02 11:47:38.478879069 +0000 UTC m=+3035.483128031" lastFinishedPulling="2026-02-02 11:47:50.729528584 +0000 UTC m=+3047.733777546" observedRunningTime="2026-02-02 11:47:51.99955361 +0000 UTC m=+3049.003802582" watchObservedRunningTime="2026-02-02 11:47:52.003409345 +0000 UTC m=+3049.007658297" Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:52.005047 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fnp7t" event={"ID":"2b317877-90ba-4149-aa93-a74664be058d","Type":"ContainerStarted","Data":"8eaf42f568fe7458ea76aaf8f3648d8d5fb40b527e5ce18d557ae42512607108"} Feb 02 11:47:52 crc 
kubenswrapper[4925]: I0202 11:47:52.028182 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-96c5cb844-xrpsd" podStartSLOduration=3.007297777 podStartE2EDuration="12.028155247s" podCreationTimestamp="2026-02-02 11:47:40 +0000 UTC" firstStartedPulling="2026-02-02 11:47:41.708704725 +0000 UTC m=+3038.712953687" lastFinishedPulling="2026-02-02 11:47:50.729562195 +0000 UTC m=+3047.733811157" observedRunningTime="2026-02-02 11:47:52.022109362 +0000 UTC m=+3049.026358354" watchObservedRunningTime="2026-02-02 11:47:52.028155247 +0000 UTC m=+3049.032404209" Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:52.029428 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29d38bf8-6523-4fe2-9fb9-7385f5ea31bf","Type":"ContainerStarted","Data":"102652896b5cabd993f05887f13e0b8adccc7300888fcb992bb239cb90be1eda"} Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:52.091340 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c6d58558b-gh6c8" podStartSLOduration=2.848222689 podStartE2EDuration="12.091313461s" podCreationTimestamp="2026-02-02 11:47:40 +0000 UTC" firstStartedPulling="2026-02-02 11:47:41.4878504 +0000 UTC m=+3038.492099362" lastFinishedPulling="2026-02-02 11:47:50.730941172 +0000 UTC m=+3047.735190134" observedRunningTime="2026-02-02 11:47:52.059454746 +0000 UTC m=+3049.063703718" watchObservedRunningTime="2026-02-02 11:47:52.091313461 +0000 UTC m=+3049.095562423" Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:52.094394 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.094380534 podStartE2EDuration="10.094380534s" podCreationTimestamp="2026-02-02 11:47:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:47:52.085266287 
+0000 UTC m=+3049.089515259" watchObservedRunningTime="2026-02-02 11:47:52.094380534 +0000 UTC m=+3049.098629496" Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:52.111753 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cd59fb695-b7252" podStartSLOduration=2.881125034 podStartE2EDuration="15.111733475s" podCreationTimestamp="2026-02-02 11:47:37 +0000 UTC" firstStartedPulling="2026-02-02 11:47:38.484289846 +0000 UTC m=+3035.488538818" lastFinishedPulling="2026-02-02 11:47:50.714898297 +0000 UTC m=+3047.719147259" observedRunningTime="2026-02-02 11:47:52.102850064 +0000 UTC m=+3049.107099036" watchObservedRunningTime="2026-02-02 11:47:52.111733475 +0000 UTC m=+3049.115982437" Feb 02 11:47:52 crc kubenswrapper[4925]: I0202 11:47:52.155258 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.155239546 podStartE2EDuration="9.155239546s" podCreationTimestamp="2026-02-02 11:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:47:52.152120332 +0000 UTC m=+3049.156369314" watchObservedRunningTime="2026-02-02 11:47:52.155239546 +0000 UTC m=+3049.159488508" Feb 02 11:47:53 crc kubenswrapper[4925]: I0202 11:47:53.401101 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:53 crc kubenswrapper[4925]: I0202 11:47:53.401568 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:53 crc kubenswrapper[4925]: I0202 11:47:53.441918 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:53 crc kubenswrapper[4925]: I0202 11:47:53.454031 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Feb 02 11:47:53 crc kubenswrapper[4925]: I0202 11:47:53.528551 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 11:47:53 crc kubenswrapper[4925]: I0202 11:47:53.528604 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 11:47:53 crc kubenswrapper[4925]: I0202 11:47:53.570805 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 11:47:53 crc kubenswrapper[4925]: I0202 11:47:53.579995 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 11:47:54 crc kubenswrapper[4925]: I0202 11:47:54.050976 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2j24" event={"ID":"aa6b0698-6261-4b10-badd-2ea52206b8dd","Type":"ContainerStarted","Data":"f31a9c3148a3321599e74abe3ff2713c8344ce065c8637eff7da438b94651f6f"} Feb 02 11:47:54 crc kubenswrapper[4925]: I0202 11:47:54.053587 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:54 crc kubenswrapper[4925]: I0202 11:47:54.053624 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 11:47:54 crc kubenswrapper[4925]: I0202 11:47:54.054547 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 11:47:54 crc kubenswrapper[4925]: I0202 11:47:54.054610 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 11:47:55 crc kubenswrapper[4925]: I0202 11:47:55.665100 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 
02 11:47:55 crc kubenswrapper[4925]: E0202 11:47:55.665922 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:47:56 crc kubenswrapper[4925]: I0202 11:47:56.073280 4925 generic.go:334] "Generic (PLEG): container finished" podID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerID="f31a9c3148a3321599e74abe3ff2713c8344ce065c8637eff7da438b94651f6f" exitCode=0 Feb 02 11:47:56 crc kubenswrapper[4925]: I0202 11:47:56.073325 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2j24" event={"ID":"aa6b0698-6261-4b10-badd-2ea52206b8dd","Type":"ContainerDied","Data":"f31a9c3148a3321599e74abe3ff2713c8344ce065c8637eff7da438b94651f6f"} Feb 02 11:47:57 crc kubenswrapper[4925]: I0202 11:47:57.550836 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:47:57 crc kubenswrapper[4925]: I0202 11:47:57.691632 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:48:00 crc kubenswrapper[4925]: I0202 11:48:00.107362 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2j24" event={"ID":"aa6b0698-6261-4b10-badd-2ea52206b8dd","Type":"ContainerStarted","Data":"92b0d72bc75e300663c75297d339a0f33d44313e85c8787d8a52d945289fef27"} Feb 02 11:48:00 crc kubenswrapper[4925]: I0202 11:48:00.110256 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fnp7t" 
event={"ID":"2b317877-90ba-4149-aa93-a74664be058d","Type":"ContainerStarted","Data":"d4b39a28ad2b0d50c3909d23cec1f2fd2ec523de942218fe4ebd70a91efdd2ae"} Feb 02 11:48:00 crc kubenswrapper[4925]: I0202 11:48:00.131719 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w2j24" podStartSLOduration=4.894364101 podStartE2EDuration="12.131695524s" podCreationTimestamp="2026-02-02 11:47:48 +0000 UTC" firstStartedPulling="2026-02-02 11:47:52.005030699 +0000 UTC m=+3049.009279661" lastFinishedPulling="2026-02-02 11:47:59.242362122 +0000 UTC m=+3056.246611084" observedRunningTime="2026-02-02 11:48:00.124868239 +0000 UTC m=+3057.129117201" watchObservedRunningTime="2026-02-02 11:48:00.131695524 +0000 UTC m=+3057.135944486" Feb 02 11:48:00 crc kubenswrapper[4925]: I0202 11:48:00.599231 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:48:00 crc kubenswrapper[4925]: I0202 11:48:00.599598 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:48:00 crc kubenswrapper[4925]: I0202 11:48:00.804266 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:48:00 crc kubenswrapper[4925]: I0202 11:48:00.804345 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:48:01 crc kubenswrapper[4925]: I0202 11:48:01.074511 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 11:48:01 crc kubenswrapper[4925]: I0202 11:48:01.100992 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-fnp7t" podStartSLOduration=6.083052383 podStartE2EDuration="14.100970696s" podCreationTimestamp="2026-02-02 11:47:47 +0000 UTC" firstStartedPulling="2026-02-02 
11:47:51.222098415 +0000 UTC m=+3048.226347377" lastFinishedPulling="2026-02-02 11:47:59.240016728 +0000 UTC m=+3056.244265690" observedRunningTime="2026-02-02 11:48:00.154882164 +0000 UTC m=+3057.159131116" watchObservedRunningTime="2026-02-02 11:48:01.100970696 +0000 UTC m=+3058.105219658" Feb 02 11:48:01 crc kubenswrapper[4925]: I0202 11:48:01.245690 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 11:48:03 crc kubenswrapper[4925]: I0202 11:48:03.408130 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 11:48:03 crc kubenswrapper[4925]: I0202 11:48:03.422141 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 11:48:06 crc kubenswrapper[4925]: I0202 11:48:06.663976 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:48:06 crc kubenswrapper[4925]: E0202 11:48:06.664706 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:48:08 crc kubenswrapper[4925]: I0202 11:48:08.616819 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:48:08 crc kubenswrapper[4925]: I0202 11:48:08.617188 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:48:09 crc kubenswrapper[4925]: I0202 11:48:09.669314 4925 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-w2j24" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerName="registry-server" probeResult="failure" output=< Feb 02 11:48:09 crc kubenswrapper[4925]: timeout: failed to connect service ":50051" within 1s Feb 02 11:48:09 crc kubenswrapper[4925]: > Feb 02 11:48:10 crc kubenswrapper[4925]: I0202 11:48:10.609052 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c6d58558b-gh6c8" podUID="acc24fd1-e3f5-4235-9190-c9aad51e4282" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.245:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.245:8443: connect: connection refused" Feb 02 11:48:10 crc kubenswrapper[4925]: I0202 11:48:10.807799 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-96c5cb844-xrpsd" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Feb 02 11:48:12 crc kubenswrapper[4925]: I0202 11:48:12.220487 4925 generic.go:334] "Generic (PLEG): container finished" podID="2b317877-90ba-4149-aa93-a74664be058d" containerID="d4b39a28ad2b0d50c3909d23cec1f2fd2ec523de942218fe4ebd70a91efdd2ae" exitCode=0 Feb 02 11:48:12 crc kubenswrapper[4925]: I0202 11:48:12.220531 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fnp7t" event={"ID":"2b317877-90ba-4149-aa93-a74664be058d","Type":"ContainerDied","Data":"d4b39a28ad2b0d50c3909d23cec1f2fd2ec523de942218fe4ebd70a91efdd2ae"} Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.692881 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-fnp7t" Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.804570 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-config-data\") pod \"2b317877-90ba-4149-aa93-a74664be058d\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.804648 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvzr7\" (UniqueName: \"kubernetes.io/projected/2b317877-90ba-4149-aa93-a74664be058d-kube-api-access-zvzr7\") pod \"2b317877-90ba-4149-aa93-a74664be058d\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.804678 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-combined-ca-bundle\") pod \"2b317877-90ba-4149-aa93-a74664be058d\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.804869 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-job-config-data\") pod \"2b317877-90ba-4149-aa93-a74664be058d\" (UID: \"2b317877-90ba-4149-aa93-a74664be058d\") " Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.811204 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "2b317877-90ba-4149-aa93-a74664be058d" (UID: "2b317877-90ba-4149-aa93-a74664be058d"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.813952 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-config-data" (OuterVolumeSpecName: "config-data") pod "2b317877-90ba-4149-aa93-a74664be058d" (UID: "2b317877-90ba-4149-aa93-a74664be058d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.823319 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b317877-90ba-4149-aa93-a74664be058d-kube-api-access-zvzr7" (OuterVolumeSpecName: "kube-api-access-zvzr7") pod "2b317877-90ba-4149-aa93-a74664be058d" (UID: "2b317877-90ba-4149-aa93-a74664be058d"). InnerVolumeSpecName "kube-api-access-zvzr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.847097 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b317877-90ba-4149-aa93-a74664be058d" (UID: "2b317877-90ba-4149-aa93-a74664be058d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.907977 4925 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.908027 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.908048 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvzr7\" (UniqueName: \"kubernetes.io/projected/2b317877-90ba-4149-aa93-a74664be058d-kube-api-access-zvzr7\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:13 crc kubenswrapper[4925]: I0202 11:48:13.908064 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b317877-90ba-4149-aa93-a74664be058d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.261248 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-fnp7t" event={"ID":"2b317877-90ba-4149-aa93-a74664be058d","Type":"ContainerDied","Data":"8eaf42f568fe7458ea76aaf8f3648d8d5fb40b527e5ce18d557ae42512607108"} Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.261715 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eaf42f568fe7458ea76aaf8f3648d8d5fb40b527e5ce18d557ae42512607108" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.261339 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-fnp7t" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.621809 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:48:14 crc kubenswrapper[4925]: E0202 11:48:14.622334 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b317877-90ba-4149-aa93-a74664be058d" containerName="manila-db-sync" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.622356 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b317877-90ba-4149-aa93-a74664be058d" containerName="manila-db-sync" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.622578 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b317877-90ba-4149-aa93-a74664be058d" containerName="manila-db-sync" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.623639 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.628598 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.628786 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.628901 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-g9rnx" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.631706 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.635466 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.640034 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.640747 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.709429 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.709485 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.729925 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.729991 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730021 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk9v4\" (UniqueName: \"kubernetes.io/projected/0da3d601-adc6-40aa-9e21-697e239bdfa2-kube-api-access-sk9v4\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730048 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730109 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25dr7\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-kube-api-access-25dr7\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730154 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730242 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730262 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-ceph\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730285 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-scripts\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730322 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730345 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-scripts\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730367 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da3d601-adc6-40aa-9e21-697e239bdfa2-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730400 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.730430 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.783638 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-z5snj"] Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.786206 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.804614 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-z5snj"] Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834426 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834524 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834552 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk9v4\" (UniqueName: \"kubernetes.io/projected/0da3d601-adc6-40aa-9e21-697e239bdfa2-kube-api-access-sk9v4\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834574 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834610 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25dr7\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-kube-api-access-25dr7\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834671 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834800 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834827 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-ceph\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834848 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-scripts\") pod \"manila-scheduler-0\" (UID: 
\"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834894 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-scripts\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834915 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834943 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da3d601-adc6-40aa-9e21-697e239bdfa2-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834972 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.834997 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.837183 
4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da3d601-adc6-40aa-9e21-697e239bdfa2-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.837273 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.837827 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.841905 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.847201 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-scripts\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.847793 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data\") pod 
\"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.849240 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.850772 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.853698 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-scripts\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.859837 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-ceph\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.861479 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.867940 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.871722 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25dr7\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-kube-api-access-25dr7\") pod \"manila-share-share1-0\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.877773 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk9v4\" (UniqueName: \"kubernetes.io/projected/0da3d601-adc6-40aa-9e21-697e239bdfa2-kube-api-access-sk9v4\") pod \"manila-scheduler-0\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.893208 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.895555 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.901944 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.905931 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.939911 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.940000 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.940045 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ww97\" (UniqueName: \"kubernetes.io/projected/151bbe9a-f79f-475b-88ad-1337e6ec9312-kube-api-access-6ww97\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.940136 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " 
pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.940193 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-config\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.940330 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.960501 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:48:14 crc kubenswrapper[4925]: I0202 11:48:14.985580 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041720 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041766 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041788 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42lhz\" (UniqueName: \"kubernetes.io/projected/4b01c514-a311-4ca8-bc05-2305a48eae5f-kube-api-access-42lhz\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041818 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041837 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b01c514-a311-4ca8-bc05-2305a48eae5f-etc-machine-id\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: 
I0202 11:48:15.041858 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041879 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b01c514-a311-4ca8-bc05-2305a48eae5f-logs\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041898 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data-custom\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041918 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ww97\" (UniqueName: \"kubernetes.io/projected/151bbe9a-f79f-475b-88ad-1337e6ec9312-kube-api-access-6ww97\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041970 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.041993 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.042099 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-config\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.042135 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-scripts\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.043877 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.045106 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.049154 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-config\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.049597 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.050006 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/151bbe9a-f79f-475b-88ad-1337e6ec9312-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.075943 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ww97\" (UniqueName: \"kubernetes.io/projected/151bbe9a-f79f-475b-88ad-1337e6ec9312-kube-api-access-6ww97\") pod \"dnsmasq-dns-76b5fdb995-z5snj\" (UID: \"151bbe9a-f79f-475b-88ad-1337e6ec9312\") " pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.106663 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.149139 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42lhz\" (UniqueName: \"kubernetes.io/projected/4b01c514-a311-4ca8-bc05-2305a48eae5f-kube-api-access-42lhz\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.149554 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.149599 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b01c514-a311-4ca8-bc05-2305a48eae5f-etc-machine-id\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.149634 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b01c514-a311-4ca8-bc05-2305a48eae5f-logs\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.149657 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data-custom\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.149712 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.149774 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-scripts\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.150617 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b01c514-a311-4ca8-bc05-2305a48eae5f-logs\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.151738 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b01c514-a311-4ca8-bc05-2305a48eae5f-etc-machine-id\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.158970 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-scripts\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.159660 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.163744 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data-custom\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.184353 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42lhz\" (UniqueName: \"kubernetes.io/projected/4b01c514-a311-4ca8-bc05-2305a48eae5f-kube-api-access-42lhz\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.186798 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.273661 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.679108 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.738880 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:48:15 crc kubenswrapper[4925]: I0202 11:48:15.831825 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-z5snj"] Feb 02 11:48:16 crc kubenswrapper[4925]: I0202 11:48:16.073450 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:48:16 crc kubenswrapper[4925]: W0202 11:48:16.075886 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b01c514_a311_4ca8_bc05_2305a48eae5f.slice/crio-56929ebdcc8e80d4aa9ab3aad2b76c62b007d8ca212e68d3529555085fe957b4 WatchSource:0}: Error finding container 56929ebdcc8e80d4aa9ab3aad2b76c62b007d8ca212e68d3529555085fe957b4: Status 404 returned error can't find the container with id 56929ebdcc8e80d4aa9ab3aad2b76c62b007d8ca212e68d3529555085fe957b4 Feb 02 11:48:16 crc kubenswrapper[4925]: I0202 11:48:16.303250 4925 generic.go:334] "Generic (PLEG): container finished" podID="151bbe9a-f79f-475b-88ad-1337e6ec9312" containerID="ae07c1bbb078b0b58c0466b1864cabb690b9a3144603d14b649482c0f17ac93e" exitCode=0 Feb 02 11:48:16 crc kubenswrapper[4925]: I0202 11:48:16.303333 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" event={"ID":"151bbe9a-f79f-475b-88ad-1337e6ec9312","Type":"ContainerDied","Data":"ae07c1bbb078b0b58c0466b1864cabb690b9a3144603d14b649482c0f17ac93e"} Feb 02 11:48:16 crc kubenswrapper[4925]: I0202 11:48:16.303362 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" 
event={"ID":"151bbe9a-f79f-475b-88ad-1337e6ec9312","Type":"ContainerStarted","Data":"0cd8ddaf6ca02103a0bd278feb24c91ee9ade102887f998404689708474b5e6b"} Feb 02 11:48:16 crc kubenswrapper[4925]: I0202 11:48:16.332323 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"0da3d601-adc6-40aa-9e21-697e239bdfa2","Type":"ContainerStarted","Data":"55e088a5d6c3424ab8c02634916877cf4fb90c5c8e29351baafbc04748c2fdaf"} Feb 02 11:48:16 crc kubenswrapper[4925]: I0202 11:48:16.352172 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4b01c514-a311-4ca8-bc05-2305a48eae5f","Type":"ContainerStarted","Data":"56929ebdcc8e80d4aa9ab3aad2b76c62b007d8ca212e68d3529555085fe957b4"} Feb 02 11:48:16 crc kubenswrapper[4925]: I0202 11:48:16.361832 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7","Type":"ContainerStarted","Data":"3bdc2e2e6340f052d794a4222ac5126f7385691b635e0e51e34004332c24e3c0"} Feb 02 11:48:17 crc kubenswrapper[4925]: I0202 11:48:17.383325 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" event={"ID":"151bbe9a-f79f-475b-88ad-1337e6ec9312","Type":"ContainerStarted","Data":"665df6ca13e253d2c9d68b221edd78de6044677d75702070b7b2d753274dec97"} Feb 02 11:48:17 crc kubenswrapper[4925]: I0202 11:48:17.384288 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:17 crc kubenswrapper[4925]: I0202 11:48:17.389345 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"0da3d601-adc6-40aa-9e21-697e239bdfa2","Type":"ContainerStarted","Data":"3c4d28f035f0267f243a8cc53fd6f99219e41ab37c20f4020ec589b5c65bf4c9"} Feb 02 11:48:17 crc kubenswrapper[4925]: I0202 11:48:17.394996 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-api-0" event={"ID":"4b01c514-a311-4ca8-bc05-2305a48eae5f","Type":"ContainerStarted","Data":"a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca"} Feb 02 11:48:17 crc kubenswrapper[4925]: I0202 11:48:17.409947 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" podStartSLOduration=3.409926978 podStartE2EDuration="3.409926978s" podCreationTimestamp="2026-02-02 11:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:48:17.402571239 +0000 UTC m=+3074.406820201" watchObservedRunningTime="2026-02-02 11:48:17.409926978 +0000 UTC m=+3074.414175940" Feb 02 11:48:17 crc kubenswrapper[4925]: I0202 11:48:17.775195 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:48:18 crc kubenswrapper[4925]: I0202 11:48:18.410062 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4b01c514-a311-4ca8-bc05-2305a48eae5f","Type":"ContainerStarted","Data":"9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726"} Feb 02 11:48:18 crc kubenswrapper[4925]: I0202 11:48:18.410506 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 11:48:18 crc kubenswrapper[4925]: I0202 11:48:18.413666 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"0da3d601-adc6-40aa-9e21-697e239bdfa2","Type":"ContainerStarted","Data":"60bc72c7c9c96111e91e98f6c17d3a91f12ad15e1d2f06485961c6a4196de0dc"} Feb 02 11:48:18 crc kubenswrapper[4925]: I0202 11:48:18.435964 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.435941281 podStartE2EDuration="4.435941281s" podCreationTimestamp="2026-02-02 11:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:48:18.432731304 +0000 UTC m=+3075.436980266" watchObservedRunningTime="2026-02-02 11:48:18.435941281 +0000 UTC m=+3075.440190243" Feb 02 11:48:18 crc kubenswrapper[4925]: I0202 11:48:18.465067 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.8332659 podStartE2EDuration="4.465046021s" podCreationTimestamp="2026-02-02 11:48:14 +0000 UTC" firstStartedPulling="2026-02-02 11:48:15.755521167 +0000 UTC m=+3072.759770129" lastFinishedPulling="2026-02-02 11:48:16.387301278 +0000 UTC m=+3073.391550250" observedRunningTime="2026-02-02 11:48:18.458355399 +0000 UTC m=+3075.462604371" watchObservedRunningTime="2026-02-02 11:48:18.465046021 +0000 UTC m=+3075.469294983" Feb 02 11:48:18 crc kubenswrapper[4925]: I0202 11:48:18.665401 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:48:18 crc kubenswrapper[4925]: E0202 11:48:18.665714 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:48:18 crc kubenswrapper[4925]: I0202 11:48:18.683006 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:48:18 crc kubenswrapper[4925]: I0202 11:48:18.738153 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:48:19 crc kubenswrapper[4925]: I0202 11:48:19.426649 4925 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/manila-api-0" podUID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerName="manila-api" containerID="cri-o://9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726" gracePeriod=30 Feb 02 11:48:19 crc kubenswrapper[4925]: I0202 11:48:19.427136 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerName="manila-api-log" containerID="cri-o://a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca" gracePeriod=30 Feb 02 11:48:19 crc kubenswrapper[4925]: I0202 11:48:19.470636 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2j24"] Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.142842 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.276765 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-scripts\") pod \"4b01c514-a311-4ca8-bc05-2305a48eae5f\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.276895 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data\") pod \"4b01c514-a311-4ca8-bc05-2305a48eae5f\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.277023 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42lhz\" (UniqueName: \"kubernetes.io/projected/4b01c514-a311-4ca8-bc05-2305a48eae5f-kube-api-access-42lhz\") pod \"4b01c514-a311-4ca8-bc05-2305a48eae5f\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " Feb 02 11:48:20 crc 
kubenswrapper[4925]: I0202 11:48:20.277064 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b01c514-a311-4ca8-bc05-2305a48eae5f-etc-machine-id\") pod \"4b01c514-a311-4ca8-bc05-2305a48eae5f\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.277740 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data-custom\") pod \"4b01c514-a311-4ca8-bc05-2305a48eae5f\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.277808 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b01c514-a311-4ca8-bc05-2305a48eae5f-logs\") pod \"4b01c514-a311-4ca8-bc05-2305a48eae5f\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.277843 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-combined-ca-bundle\") pod \"4b01c514-a311-4ca8-bc05-2305a48eae5f\" (UID: \"4b01c514-a311-4ca8-bc05-2305a48eae5f\") " Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.280248 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b01c514-a311-4ca8-bc05-2305a48eae5f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4b01c514-a311-4ca8-bc05-2305a48eae5f" (UID: "4b01c514-a311-4ca8-bc05-2305a48eae5f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.280546 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b01c514-a311-4ca8-bc05-2305a48eae5f-logs" (OuterVolumeSpecName: "logs") pod "4b01c514-a311-4ca8-bc05-2305a48eae5f" (UID: "4b01c514-a311-4ca8-bc05-2305a48eae5f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.288433 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-scripts" (OuterVolumeSpecName: "scripts") pod "4b01c514-a311-4ca8-bc05-2305a48eae5f" (UID: "4b01c514-a311-4ca8-bc05-2305a48eae5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.288485 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4b01c514-a311-4ca8-bc05-2305a48eae5f" (UID: "4b01c514-a311-4ca8-bc05-2305a48eae5f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.305549 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b01c514-a311-4ca8-bc05-2305a48eae5f-kube-api-access-42lhz" (OuterVolumeSpecName: "kube-api-access-42lhz") pod "4b01c514-a311-4ca8-bc05-2305a48eae5f" (UID: "4b01c514-a311-4ca8-bc05-2305a48eae5f"). InnerVolumeSpecName "kube-api-access-42lhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.312348 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b01c514-a311-4ca8-bc05-2305a48eae5f" (UID: "4b01c514-a311-4ca8-bc05-2305a48eae5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.336981 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data" (OuterVolumeSpecName: "config-data") pod "4b01c514-a311-4ca8-bc05-2305a48eae5f" (UID: "4b01c514-a311-4ca8-bc05-2305a48eae5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.382275 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42lhz\" (UniqueName: \"kubernetes.io/projected/4b01c514-a311-4ca8-bc05-2305a48eae5f-kube-api-access-42lhz\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.382315 4925 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b01c514-a311-4ca8-bc05-2305a48eae5f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.382429 4925 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.382445 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b01c514-a311-4ca8-bc05-2305a48eae5f-logs\") on node \"crc\" DevicePath 
\"\"" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.382459 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.382468 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.382477 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01c514-a311-4ca8-bc05-2305a48eae5f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.439653 4925 generic.go:334] "Generic (PLEG): container finished" podID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerID="9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726" exitCode=0 Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.439683 4925 generic.go:334] "Generic (PLEG): container finished" podID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerID="a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca" exitCode=143 Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.439921 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w2j24" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerName="registry-server" containerID="cri-o://92b0d72bc75e300663c75297d339a0f33d44313e85c8787d8a52d945289fef27" gracePeriod=2 Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.441820 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.442223 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4b01c514-a311-4ca8-bc05-2305a48eae5f","Type":"ContainerDied","Data":"9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726"} Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.442267 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4b01c514-a311-4ca8-bc05-2305a48eae5f","Type":"ContainerDied","Data":"a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca"} Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.442279 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4b01c514-a311-4ca8-bc05-2305a48eae5f","Type":"ContainerDied","Data":"56929ebdcc8e80d4aa9ab3aad2b76c62b007d8ca212e68d3529555085fe957b4"} Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.442295 4925 scope.go:117] "RemoveContainer" containerID="9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.492094 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.513823 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.526325 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 11:48:20 crc kubenswrapper[4925]: E0202 11:48:20.538020 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerName="manila-api" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.538281 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerName="manila-api" Feb 02 11:48:20 crc kubenswrapper[4925]: E0202 
11:48:20.538389 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerName="manila-api-log" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.538491 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerName="manila-api-log" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.539064 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerName="manila-api" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.539345 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b01c514-a311-4ca8-bc05-2305a48eae5f" containerName="manila-api-log" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.540458 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.540653 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.549598 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.549972 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.551963 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.578404 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.578669 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="ceilometer-central-agent" containerID="cri-o://bf4acea1d76a24c763dd57c8b8848f75fc99bfe52fd5e968bee4a7945895e282" gracePeriod=30 Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.581179 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="proxy-httpd" containerID="cri-o://2d0262c8ab7349e23b6b642e75f5d551f5697e295c074518a98d854a010f9dfa" gracePeriod=30 Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.581250 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="ceilometer-notification-agent" containerID="cri-o://a70c0517f26283b1aac165561f316a3dbd9462d251fc85201a911c8f27c6c6ad" gracePeriod=30 Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.581435 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" 
containerName="sg-core" containerID="cri-o://7cba57b1454de04b102aad779a5243e307f47a79b1c937b814177137d1c62b07" gracePeriod=30 Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.677993 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b01c514-a311-4ca8-bc05-2305a48eae5f" path="/var/lib/kubelet/pods/4b01c514-a311-4ca8-bc05-2305a48eae5f/volumes" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.689402 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-scripts\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.689460 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.694377 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjgx\" (UniqueName: \"kubernetes.io/projected/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-kube-api-access-tmjgx\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.694552 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-config-data\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.694600 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-logs\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.694749 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-etc-machine-id\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.694905 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-config-data-custom\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.694961 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-internal-tls-certs\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.695006 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-public-tls-certs\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.797364 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-public-tls-certs\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.797413 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-scripts\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.797431 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.797464 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmjgx\" (UniqueName: \"kubernetes.io/projected/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-kube-api-access-tmjgx\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.797540 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-config-data\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.797561 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-logs\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 
11:48:20.797616 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-etc-machine-id\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.797676 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-config-data-custom\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.797703 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-internal-tls-certs\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.799665 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-etc-machine-id\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.799828 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-logs\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.805287 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-config-data\") pod \"manila-api-0\" (UID: 
\"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.806437 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-scripts\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.806747 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-config-data-custom\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.808629 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-internal-tls-certs\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.823804 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.824733 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-public-tls-certs\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.827243 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmjgx\" (UniqueName: 
\"kubernetes.io/projected/6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e-kube-api-access-tmjgx\") pod \"manila-api-0\" (UID: \"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e\") " pod="openstack/manila-api-0" Feb 02 11:48:20 crc kubenswrapper[4925]: I0202 11:48:20.939798 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:48:21 crc kubenswrapper[4925]: I0202 11:48:21.468701 4925 generic.go:334] "Generic (PLEG): container finished" podID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerID="2d0262c8ab7349e23b6b642e75f5d551f5697e295c074518a98d854a010f9dfa" exitCode=0 Feb 02 11:48:21 crc kubenswrapper[4925]: I0202 11:48:21.468998 4925 generic.go:334] "Generic (PLEG): container finished" podID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerID="7cba57b1454de04b102aad779a5243e307f47a79b1c937b814177137d1c62b07" exitCode=2 Feb 02 11:48:21 crc kubenswrapper[4925]: I0202 11:48:21.469013 4925 generic.go:334] "Generic (PLEG): container finished" podID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerID="bf4acea1d76a24c763dd57c8b8848f75fc99bfe52fd5e968bee4a7945895e282" exitCode=0 Feb 02 11:48:21 crc kubenswrapper[4925]: I0202 11:48:21.468812 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerDied","Data":"2d0262c8ab7349e23b6b642e75f5d551f5697e295c074518a98d854a010f9dfa"} Feb 02 11:48:21 crc kubenswrapper[4925]: I0202 11:48:21.469110 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerDied","Data":"7cba57b1454de04b102aad779a5243e307f47a79b1c937b814177137d1c62b07"} Feb 02 11:48:21 crc kubenswrapper[4925]: I0202 11:48:21.469126 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerDied","Data":"bf4acea1d76a24c763dd57c8b8848f75fc99bfe52fd5e968bee4a7945895e282"} Feb 02 11:48:21 crc kubenswrapper[4925]: I0202 11:48:21.471312 4925 generic.go:334] "Generic (PLEG): container finished" podID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerID="92b0d72bc75e300663c75297d339a0f33d44313e85c8787d8a52d945289fef27" exitCode=0 Feb 02 11:48:21 crc kubenswrapper[4925]: I0202 11:48:21.471360 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2j24" event={"ID":"aa6b0698-6261-4b10-badd-2ea52206b8dd","Type":"ContainerDied","Data":"92b0d72bc75e300663c75297d339a0f33d44313e85c8787d8a52d945289fef27"} Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.485887 4925 generic.go:334] "Generic (PLEG): container finished" podID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerID="a70c0517f26283b1aac165561f316a3dbd9462d251fc85201a911c8f27c6c6ad" exitCode=0 Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.485974 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerDied","Data":"a70c0517f26283b1aac165561f316a3dbd9462d251fc85201a911c8f27c6c6ad"} Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.490918 4925 generic.go:334] "Generic (PLEG): container finished" podID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerID="5c717323a1ee39e8f9364d15b230ecd1883a4ee0f9eb10bbaefcd7a8e6fbba16" exitCode=137 Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.490958 4925 generic.go:334] "Generic (PLEG): container finished" podID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerID="a190391bd1cd88e671dc4253974a39581aa4023ad5d6553a83ab852eb3015765" exitCode=137 Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.490979 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cd59fb695-b7252" 
event={"ID":"2165d1e8-2b5c-4b4d-b55f-d2280523c022","Type":"ContainerDied","Data":"5c717323a1ee39e8f9364d15b230ecd1883a4ee0f9eb10bbaefcd7a8e6fbba16"} Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.491028 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cd59fb695-b7252" event={"ID":"2165d1e8-2b5c-4b4d-b55f-d2280523c022","Type":"ContainerDied","Data":"a190391bd1cd88e671dc4253974a39581aa4023ad5d6553a83ab852eb3015765"} Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.494699 4925 generic.go:334] "Generic (PLEG): container finished" podID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerID="0811e307a0adbe2450e984038467b742e6ee1d31a445ecec67ab18f02e8ee1a7" exitCode=137 Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.494730 4925 generic.go:334] "Generic (PLEG): container finished" podID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerID="740a8487a4456f778e7daa3cf6d81d35c2d724bbf137bd17dbc4cc68dd1643a0" exitCode=137 Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.494749 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5977b94d89-jwxrq" event={"ID":"6b6bd22b-c4b9-407f-993c-4132ca172b06","Type":"ContainerDied","Data":"0811e307a0adbe2450e984038467b742e6ee1d31a445ecec67ab18f02e8ee1a7"} Feb 02 11:48:22 crc kubenswrapper[4925]: I0202 11:48:22.494769 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5977b94d89-jwxrq" event={"ID":"6b6bd22b-c4b9-407f-993c-4132ca172b06","Type":"ContainerDied","Data":"740a8487a4456f778e7daa3cf6d81d35c2d724bbf137bd17dbc4cc68dd1643a0"} Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.429090 4925 scope.go:117] "RemoveContainer" containerID="a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.518411 4925 scope.go:117] "RemoveContainer" containerID="9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726" Feb 02 11:48:23 crc kubenswrapper[4925]: E0202 
11:48:23.520312 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726\": container with ID starting with 9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726 not found: ID does not exist" containerID="9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.520347 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726"} err="failed to get container status \"9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726\": rpc error: code = NotFound desc = could not find container \"9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726\": container with ID starting with 9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726 not found: ID does not exist" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.520372 4925 scope.go:117] "RemoveContainer" containerID="a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca" Feb 02 11:48:23 crc kubenswrapper[4925]: E0202 11:48:23.521327 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca\": container with ID starting with a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca not found: ID does not exist" containerID="a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.521346 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca"} err="failed to get container status \"a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca\": rpc 
error: code = NotFound desc = could not find container \"a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca\": container with ID starting with a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca not found: ID does not exist" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.521359 4925 scope.go:117] "RemoveContainer" containerID="9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.527436 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726"} err="failed to get container status \"9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726\": rpc error: code = NotFound desc = could not find container \"9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726\": container with ID starting with 9994c09fb49ca38e47afa29a06e7471c22c544679ec1faba7cc35ef563f94726 not found: ID does not exist" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.527481 4925 scope.go:117] "RemoveContainer" containerID="a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.530962 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca"} err="failed to get container status \"a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca\": rpc error: code = NotFound desc = could not find container \"a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca\": container with ID starting with a83460953338f4d353ad6b5e4c85cea53bae0dc2eaa152f52eb5a30603d826ca not found: ID does not exist" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.583867 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.723293 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.742530 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.771888 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-catalog-content\") pod \"aa6b0698-6261-4b10-badd-2ea52206b8dd\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.772001 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thzm6\" (UniqueName: \"kubernetes.io/projected/aa6b0698-6261-4b10-badd-2ea52206b8dd-kube-api-access-thzm6\") pod \"aa6b0698-6261-4b10-badd-2ea52206b8dd\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.772239 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-utilities\") pod \"aa6b0698-6261-4b10-badd-2ea52206b8dd\" (UID: \"aa6b0698-6261-4b10-badd-2ea52206b8dd\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.774319 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-utilities" (OuterVolumeSpecName: "utilities") pod "aa6b0698-6261-4b10-badd-2ea52206b8dd" (UID: "aa6b0698-6261-4b10-badd-2ea52206b8dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.784667 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa6b0698-6261-4b10-badd-2ea52206b8dd-kube-api-access-thzm6" (OuterVolumeSpecName: "kube-api-access-thzm6") pod "aa6b0698-6261-4b10-badd-2ea52206b8dd" (UID: "aa6b0698-6261-4b10-badd-2ea52206b8dd"). InnerVolumeSpecName "kube-api-access-thzm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.832116 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.876096 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thzm6\" (UniqueName: \"kubernetes.io/projected/aa6b0698-6261-4b10-badd-2ea52206b8dd-kube-api-access-thzm6\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.876127 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.957121 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa6b0698-6261-4b10-badd-2ea52206b8dd" (UID: "aa6b0698-6261-4b10-badd-2ea52206b8dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.977800 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-log-httpd\") pod \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.977919 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-scripts\") pod \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.977997 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-run-httpd\") pod \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.978066 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-combined-ca-bundle\") pod \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.978184 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-config-data\") pod \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.978220 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-ceilometer-tls-certs\") pod \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.978257 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sj8c\" (UniqueName: \"kubernetes.io/projected/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-kube-api-access-4sj8c\") pod \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.978313 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-sg-core-conf-yaml\") pod \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\" (UID: \"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d\") " Feb 02 11:48:23 crc kubenswrapper[4925]: I0202 11:48:23.978882 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa6b0698-6261-4b10-badd-2ea52206b8dd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.014584 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" (UID: "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.016033 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" (UID: "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.021460 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-kube-api-access-4sj8c" (OuterVolumeSpecName: "kube-api-access-4sj8c") pod "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" (UID: "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d"). InnerVolumeSpecName "kube-api-access-4sj8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.023350 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-scripts" (OuterVolumeSpecName: "scripts") pod "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" (UID: "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.046106 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:48:24 crc kubenswrapper[4925]: W0202 11:48:24.061335 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f7f1ca0_aa5c_48c8_82f1_5d0ef0f1e66e.slice/crio-65a9982e227850cf4acbba4ab0368a342966202ccfddafca82c8f122dfae97fa WatchSource:0}: Error finding container 65a9982e227850cf4acbba4ab0368a342966202ccfddafca82c8f122dfae97fa: Status 404 returned error can't find the container with id 65a9982e227850cf4acbba4ab0368a342966202ccfddafca82c8f122dfae97fa Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.081543 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sj8c\" (UniqueName: \"kubernetes.io/projected/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-kube-api-access-4sj8c\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.081569 4925 reconciler_common.go:293] 
"Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.081578 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.081602 4925 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.083125 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" (UID: "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.100305 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" (UID: "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.161290 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" (UID: "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.170220 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-config-data" (OuterVolumeSpecName: "config-data") pod "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" (UID: "1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.183930 4925 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.183959 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.183988 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.184001 4925 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.554172 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2j24" event={"ID":"aa6b0698-6261-4b10-badd-2ea52206b8dd","Type":"ContainerDied","Data":"f67da9398dbf6ed673879d45164c6a046ba827a0813110e644b3808750789129"} Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.554226 4925 scope.go:117] "RemoveContainer" 
containerID="92b0d72bc75e300663c75297d339a0f33d44313e85c8787d8a52d945289fef27" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.554247 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2j24" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.556464 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e","Type":"ContainerStarted","Data":"65a9982e227850cf4acbba4ab0368a342966202ccfddafca82c8f122dfae97fa"} Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.558990 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d","Type":"ContainerDied","Data":"cd490073537e39ef2fa54da3850c5e4658bd0768cbbad46e2d57cf62c1c680f3"} Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.559206 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.583424 4925 scope.go:117] "RemoveContainer" containerID="f31a9c3148a3321599e74abe3ff2713c8344ce065c8637eff7da438b94651f6f" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.591154 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2j24"] Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.601185 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w2j24"] Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.625344 4925 scope.go:117] "RemoveContainer" containerID="d933e03bad5a5a169ee0eee01aeeb017557c47df4a9c5c11c44a6fd63e7ecd18" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.710283 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" path="/var/lib/kubelet/pods/aa6b0698-6261-4b10-badd-2ea52206b8dd/volumes" Feb 02 
11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.711057 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.711465 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.712651 4925 scope.go:117] "RemoveContainer" containerID="2d0262c8ab7349e23b6b642e75f5d551f5697e295c074518a98d854a010f9dfa" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.745163 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:48:24 crc kubenswrapper[4925]: E0202 11:48:24.745668 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerName="extract-content" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.745692 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerName="extract-content" Feb 02 11:48:24 crc kubenswrapper[4925]: E0202 11:48:24.745705 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="proxy-httpd" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.745714 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="proxy-httpd" Feb 02 11:48:24 crc kubenswrapper[4925]: E0202 11:48:24.745732 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="sg-core" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.745739 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="sg-core" Feb 02 11:48:24 crc kubenswrapper[4925]: E0202 11:48:24.745756 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerName="registry-server" Feb 02 
11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.745764 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerName="registry-server" Feb 02 11:48:24 crc kubenswrapper[4925]: E0202 11:48:24.745784 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="ceilometer-notification-agent" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.745792 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="ceilometer-notification-agent" Feb 02 11:48:24 crc kubenswrapper[4925]: E0202 11:48:24.745804 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="ceilometer-central-agent" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.745812 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="ceilometer-central-agent" Feb 02 11:48:24 crc kubenswrapper[4925]: E0202 11:48:24.745898 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerName="extract-utilities" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.745909 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerName="extract-utilities" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.746188 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa6b0698-6261-4b10-badd-2ea52206b8dd" containerName="registry-server" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.746205 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="proxy-httpd" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.746223 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" 
containerName="ceilometer-central-agent" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.746235 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="sg-core" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.746246 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" containerName="ceilometer-notification-agent" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.748946 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.751652 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.751897 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.752447 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.756385 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.839225 4925 scope.go:117] "RemoveContainer" containerID="7cba57b1454de04b102aad779a5243e307f47a79b1c937b814177137d1c62b07" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.884465 4925 scope.go:117] "RemoveContainer" containerID="a70c0517f26283b1aac165561f316a3dbd9462d251fc85201a911c8f27c6c6ad" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.901996 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-scripts\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 
11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.902424 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.902565 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-run-httpd\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.902791 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-log-httpd\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.902898 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-config-data\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.902943 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6r4z\" (UniqueName: \"kubernetes.io/projected/6b41c49d-63e3-462d-a541-092f967fa45e-kube-api-access-h6r4z\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.903024 4925 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.903125 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:24 crc kubenswrapper[4925]: I0202 11:48:24.912854 4925 scope.go:117] "RemoveContainer" containerID="bf4acea1d76a24c763dd57c8b8848f75fc99bfe52fd5e968bee4a7945895e282" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:24.988229 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.008111 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-config-data\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.008182 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6r4z\" (UniqueName: \"kubernetes.io/projected/6b41c49d-63e3-462d-a541-092f967fa45e-kube-api-access-h6r4z\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.008230 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.008283 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.008408 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-scripts\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.008484 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.008516 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-run-httpd\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.008583 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-log-httpd\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.013555 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-log-httpd\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.015060 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-run-httpd\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.018171 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-config-data\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.021286 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.024696 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.039905 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6r4z\" (UniqueName: \"kubernetes.io/projected/6b41c49d-63e3-462d-a541-092f967fa45e-kube-api-access-h6r4z\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: 
I0202 11:48:25.040259 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.049505 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-scripts\") pod \"ceilometer-0\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.074030 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.110314 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-z5snj" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.189257 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-r8n85"] Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.190318 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" podUID="d9244868-1883-4530-890c-4858c6733192" containerName="dnsmasq-dns" containerID="cri-o://837ea4c47b02e3be518e1aaa904680fac3ba4d04d8eca0a53a5d4a9942d7b41e" gracePeriod=10 Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.421592 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.536883 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-config-data\") pod \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.537016 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmkf2\" (UniqueName: \"kubernetes.io/projected/2165d1e8-2b5c-4b4d-b55f-d2280523c022-kube-api-access-cmkf2\") pod \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.537097 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2165d1e8-2b5c-4b4d-b55f-d2280523c022-logs\") pod \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.537268 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-scripts\") pod \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.537297 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2165d1e8-2b5c-4b4d-b55f-d2280523c022-horizon-secret-key\") pod \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\" (UID: \"2165d1e8-2b5c-4b4d-b55f-d2280523c022\") " Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.540599 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2165d1e8-2b5c-4b4d-b55f-d2280523c022-logs" (OuterVolumeSpecName: "logs") pod "2165d1e8-2b5c-4b4d-b55f-d2280523c022" (UID: "2165d1e8-2b5c-4b4d-b55f-d2280523c022"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.547291 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2165d1e8-2b5c-4b4d-b55f-d2280523c022-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2165d1e8-2b5c-4b4d-b55f-d2280523c022" (UID: "2165d1e8-2b5c-4b4d-b55f-d2280523c022"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.548728 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2165d1e8-2b5c-4b4d-b55f-d2280523c022-kube-api-access-cmkf2" (OuterVolumeSpecName: "kube-api-access-cmkf2") pod "2165d1e8-2b5c-4b4d-b55f-d2280523c022" (UID: "2165d1e8-2b5c-4b4d-b55f-d2280523c022"). InnerVolumeSpecName "kube-api-access-cmkf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.575365 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e","Type":"ContainerStarted","Data":"ab54a6da60eda7073a80e49502f01eed5030968d4c13fa9d6c696134af4c323b"} Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.583230 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cd59fb695-b7252" event={"ID":"2165d1e8-2b5c-4b4d-b55f-d2280523c022","Type":"ContainerDied","Data":"be2081f3e44356e125ab0ee953d112feaf1bb87a41a064623267eee2eb684dd7"} Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.583317 4925 scope.go:117] "RemoveContainer" containerID="5c717323a1ee39e8f9364d15b230ecd1883a4ee0f9eb10bbaefcd7a8e6fbba16" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.583523 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cd59fb695-b7252" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.590061 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-config-data" (OuterVolumeSpecName: "config-data") pod "2165d1e8-2b5c-4b4d-b55f-d2280523c022" (UID: "2165d1e8-2b5c-4b4d-b55f-d2280523c022"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.594484 4925 generic.go:334] "Generic (PLEG): container finished" podID="d9244868-1883-4530-890c-4858c6733192" containerID="837ea4c47b02e3be518e1aaa904680fac3ba4d04d8eca0a53a5d4a9942d7b41e" exitCode=0 Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.594576 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" event={"ID":"d9244868-1883-4530-890c-4858c6733192","Type":"ContainerDied","Data":"837ea4c47b02e3be518e1aaa904680fac3ba4d04d8eca0a53a5d4a9942d7b41e"} Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.600494 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-scripts" (OuterVolumeSpecName: "scripts") pod "2165d1e8-2b5c-4b4d-b55f-d2280523c022" (UID: "2165d1e8-2b5c-4b4d-b55f-d2280523c022"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.640220 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.640243 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmkf2\" (UniqueName: \"kubernetes.io/projected/2165d1e8-2b5c-4b4d-b55f-d2280523c022-kube-api-access-cmkf2\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.640253 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2165d1e8-2b5c-4b4d-b55f-d2280523c022-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.640262 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2165d1e8-2b5c-4b4d-b55f-d2280523c022-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.640270 4925 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2165d1e8-2b5c-4b4d-b55f-d2280523c022-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.923537 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cd59fb695-b7252"] Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.926655 4925 scope.go:117] "RemoveContainer" containerID="a190391bd1cd88e671dc4253974a39581aa4023ad5d6553a83ab852eb3015765" Feb 02 11:48:25 crc kubenswrapper[4925]: I0202 11:48:25.935534 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6cd59fb695-b7252"] Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.016541 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.119525 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.151808 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-c6d58558b-gh6c8" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.177938 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.228537 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-96c5cb844-xrpsd"] Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.256743 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47rl9\" (UniqueName: \"kubernetes.io/projected/6b6bd22b-c4b9-407f-993c-4132ca172b06-kube-api-access-47rl9\") pod \"6b6bd22b-c4b9-407f-993c-4132ca172b06\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.256850 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-scripts\") pod \"6b6bd22b-c4b9-407f-993c-4132ca172b06\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.256948 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-config-data\") pod \"6b6bd22b-c4b9-407f-993c-4132ca172b06\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.257054 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b6bd22b-c4b9-407f-993c-4132ca172b06-horizon-secret-key\") pod \"6b6bd22b-c4b9-407f-993c-4132ca172b06\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.257228 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b6bd22b-c4b9-407f-993c-4132ca172b06-logs\") pod \"6b6bd22b-c4b9-407f-993c-4132ca172b06\" (UID: \"6b6bd22b-c4b9-407f-993c-4132ca172b06\") " Feb 02 11:48:26 crc 
kubenswrapper[4925]: I0202 11:48:26.261174 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b6bd22b-c4b9-407f-993c-4132ca172b06-logs" (OuterVolumeSpecName: "logs") pod "6b6bd22b-c4b9-407f-993c-4132ca172b06" (UID: "6b6bd22b-c4b9-407f-993c-4132ca172b06"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.265985 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6bd22b-c4b9-407f-993c-4132ca172b06-kube-api-access-47rl9" (OuterVolumeSpecName: "kube-api-access-47rl9") pod "6b6bd22b-c4b9-407f-993c-4132ca172b06" (UID: "6b6bd22b-c4b9-407f-993c-4132ca172b06"). InnerVolumeSpecName "kube-api-access-47rl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.277156 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6bd22b-c4b9-407f-993c-4132ca172b06-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6b6bd22b-c4b9-407f-993c-4132ca172b06" (UID: "6b6bd22b-c4b9-407f-993c-4132ca172b06"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.294399 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-config-data" (OuterVolumeSpecName: "config-data") pod "6b6bd22b-c4b9-407f-993c-4132ca172b06" (UID: "6b6bd22b-c4b9-407f-993c-4132ca172b06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.309323 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-scripts" (OuterVolumeSpecName: "scripts") pod "6b6bd22b-c4b9-407f-993c-4132ca172b06" (UID: "6b6bd22b-c4b9-407f-993c-4132ca172b06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.359461 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-openstack-edpm-ipam\") pod \"d9244868-1883-4530-890c-4858c6733192\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.359876 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-sb\") pod \"d9244868-1883-4530-890c-4858c6733192\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.360006 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfhl6\" (UniqueName: \"kubernetes.io/projected/d9244868-1883-4530-890c-4858c6733192-kube-api-access-qfhl6\") pod \"d9244868-1883-4530-890c-4858c6733192\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.360638 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-dns-svc\") pod \"d9244868-1883-4530-890c-4858c6733192\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.360693 4925 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-config\") pod \"d9244868-1883-4530-890c-4858c6733192\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.360739 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-nb\") pod \"d9244868-1883-4530-890c-4858c6733192\" (UID: \"d9244868-1883-4530-890c-4858c6733192\") " Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.361784 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.361809 4925 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b6bd22b-c4b9-407f-993c-4132ca172b06-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.361825 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b6bd22b-c4b9-407f-993c-4132ca172b06-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.361836 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47rl9\" (UniqueName: \"kubernetes.io/projected/6b6bd22b-c4b9-407f-993c-4132ca172b06-kube-api-access-47rl9\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.361849 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b6bd22b-c4b9-407f-993c-4132ca172b06-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.367358 4925 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9244868-1883-4530-890c-4858c6733192-kube-api-access-qfhl6" (OuterVolumeSpecName: "kube-api-access-qfhl6") pod "d9244868-1883-4530-890c-4858c6733192" (UID: "d9244868-1883-4530-890c-4858c6733192"). InnerVolumeSpecName "kube-api-access-qfhl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.434171 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d9244868-1883-4530-890c-4858c6733192" (UID: "d9244868-1883-4530-890c-4858c6733192"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.453372 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9244868-1883-4530-890c-4858c6733192" (UID: "d9244868-1883-4530-890c-4858c6733192"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.464131 4925 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.464292 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.464328 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfhl6\" (UniqueName: \"kubernetes.io/projected/d9244868-1883-4530-890c-4858c6733192-kube-api-access-qfhl6\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.472686 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-config" (OuterVolumeSpecName: "config") pod "d9244868-1883-4530-890c-4858c6733192" (UID: "d9244868-1883-4530-890c-4858c6733192"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.495868 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9244868-1883-4530-890c-4858c6733192" (UID: "d9244868-1883-4530-890c-4858c6733192"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.510188 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9244868-1883-4530-890c-4858c6733192" (UID: "d9244868-1883-4530-890c-4858c6733192"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.521689 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:48:26 crc kubenswrapper[4925]: W0202 11:48:26.523991 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b41c49d_63e3_462d_a541_092f967fa45e.slice/crio-06e7b2c37441965e0bcd7aa230da1387617c6ad81fb58d08b38b2badb0169e04 WatchSource:0}: Error finding container 06e7b2c37441965e0bcd7aa230da1387617c6ad81fb58d08b38b2badb0169e04: Status 404 returned error can't find the container with id 06e7b2c37441965e0bcd7aa230da1387617c6ad81fb58d08b38b2badb0169e04 Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.566410 4925 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.566448 4925 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.566460 4925 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9244868-1883-4530-890c-4858c6733192-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:26 crc 
kubenswrapper[4925]: I0202 11:48:26.631678 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" event={"ID":"d9244868-1883-4530-890c-4858c6733192","Type":"ContainerDied","Data":"4a9a76f845d869116f908796032a9b3955023696c56f48fdb1361e437380dbc2"} Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.631702 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-r8n85" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.631728 4925 scope.go:117] "RemoveContainer" containerID="837ea4c47b02e3be518e1aaa904680fac3ba4d04d8eca0a53a5d4a9942d7b41e" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.633933 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7","Type":"ContainerStarted","Data":"c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48"} Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.633989 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7","Type":"ContainerStarted","Data":"c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d"} Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.637114 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerStarted","Data":"06e7b2c37441965e0bcd7aa230da1387617c6ad81fb58d08b38b2badb0169e04"} Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.639930 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5977b94d89-jwxrq" event={"ID":"6b6bd22b-c4b9-407f-993c-4132ca172b06","Type":"ContainerDied","Data":"d521b4b484e2a34e1967e9a9132d844712c6f499f49f96ccccb9d17d24c81a10"} Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.640009 4925 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/horizon-5977b94d89-jwxrq" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.659823 4925 scope.go:117] "RemoveContainer" containerID="4f4ceb0bfa28f8e12e0e79069b88f740702f638ee71bcfc48ccb806908a72a83" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.663537 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e","Type":"ContainerStarted","Data":"cd98faec4fa2fc2f7829f0263372fe06420224719b35e6684ee3a3a017f06ca6"} Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.671880 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-96c5cb844-xrpsd" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon-log" containerID="cri-o://4cf5134ea8bc9226a2250703fc585cd09993a5838777b5b7aaf3a00bf1bdcb24" gracePeriod=30 Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.672257 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-96c5cb844-xrpsd" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon" containerID="cri-o://2a2caab84eef03021831e24f792ffa3c52397dade541be444489851fc252f99d" gracePeriod=30 Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.679327 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.465579728 podStartE2EDuration="12.679303564s" podCreationTimestamp="2026-02-02 11:48:14 +0000 UTC" firstStartedPulling="2026-02-02 11:48:15.710278219 +0000 UTC m=+3072.714527181" lastFinishedPulling="2026-02-02 11:48:24.924002055 +0000 UTC m=+3081.928251017" observedRunningTime="2026-02-02 11:48:26.669969911 +0000 UTC m=+3083.674218893" watchObservedRunningTime="2026-02-02 11:48:26.679303564 +0000 UTC m=+3083.683552526" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.699536 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d" path="/var/lib/kubelet/pods/1ff8d210-8c8f-4320-b2eb-6f7ad5d4cf6d/volumes" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.721777 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" path="/var/lib/kubelet/pods/2165d1e8-2b5c-4b4d-b55f-d2280523c022/volumes" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.722458 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.726544 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-r8n85"] Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.737137 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-r8n85"] Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.739100 4925 scope.go:117] "RemoveContainer" containerID="0811e307a0adbe2450e984038467b742e6ee1d31a445ecec67ab18f02e8ee1a7" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.752363 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5977b94d89-jwxrq"] Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.763529 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5977b94d89-jwxrq"] Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.778111 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.778089466 podStartE2EDuration="6.778089466s" podCreationTimestamp="2026-02-02 11:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:48:26.740634869 +0000 UTC m=+3083.744883851" watchObservedRunningTime="2026-02-02 11:48:26.778089466 +0000 UTC m=+3083.782338428" Feb 02 11:48:26 crc kubenswrapper[4925]: I0202 11:48:26.939635 4925 scope.go:117] 
"RemoveContainer" containerID="740a8487a4456f778e7daa3cf6d81d35c2d724bbf137bd17dbc4cc68dd1643a0" Feb 02 11:48:27 crc kubenswrapper[4925]: I0202 11:48:27.692817 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerStarted","Data":"133c424f847c93efe5aa25ab0b9a5a9a4325808a37335b57e8c9e23891a8f4db"} Feb 02 11:48:28 crc kubenswrapper[4925]: I0202 11:48:28.674584 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6bd22b-c4b9-407f-993c-4132ca172b06" path="/var/lib/kubelet/pods/6b6bd22b-c4b9-407f-993c-4132ca172b06/volumes" Feb 02 11:48:28 crc kubenswrapper[4925]: I0202 11:48:28.675783 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9244868-1883-4530-890c-4858c6733192" path="/var/lib/kubelet/pods/d9244868-1883-4530-890c-4858c6733192/volumes" Feb 02 11:48:28 crc kubenswrapper[4925]: I0202 11:48:28.707907 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerStarted","Data":"5c31307d753cc16f69dd3ce1102c1c570ce11afcbc06220d80d0f4c652ea4469"} Feb 02 11:48:28 crc kubenswrapper[4925]: I0202 11:48:28.707977 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerStarted","Data":"4be240fecdbecc856517261b7509c2c2ce831cfd31cb99d3aec0676437aec628"} Feb 02 11:48:28 crc kubenswrapper[4925]: I0202 11:48:28.732556 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.665109 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:48:30 crc kubenswrapper[4925]: E0202 11:48:30.665852 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.725282 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerStarted","Data":"5c2ce5e7fae0bf9679d7b8532e243129d31ee47c65b0559e442acc25731e9a84"} Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.725647 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="ceilometer-central-agent" containerID="cri-o://133c424f847c93efe5aa25ab0b9a5a9a4325808a37335b57e8c9e23891a8f4db" gracePeriod=30 Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.725665 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="sg-core" containerID="cri-o://5c31307d753cc16f69dd3ce1102c1c570ce11afcbc06220d80d0f4c652ea4469" gracePeriod=30 Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.725688 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="ceilometer-notification-agent" containerID="cri-o://4be240fecdbecc856517261b7509c2c2ce831cfd31cb99d3aec0676437aec628" gracePeriod=30 Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.725707 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="proxy-httpd" 
containerID="cri-o://5c2ce5e7fae0bf9679d7b8532e243129d31ee47c65b0559e442acc25731e9a84" gracePeriod=30 Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.730201 4925 generic.go:334] "Generic (PLEG): container finished" podID="1315a531-ca20-494e-9273-dfa832b62744" containerID="2a2caab84eef03021831e24f792ffa3c52397dade541be444489851fc252f99d" exitCode=0 Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.730249 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96c5cb844-xrpsd" event={"ID":"1315a531-ca20-494e-9273-dfa832b62744","Type":"ContainerDied","Data":"2a2caab84eef03021831e24f792ffa3c52397dade541be444489851fc252f99d"} Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.751965 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.92556898 podStartE2EDuration="6.75194753s" podCreationTimestamp="2026-02-02 11:48:24 +0000 UTC" firstStartedPulling="2026-02-02 11:48:26.526672881 +0000 UTC m=+3083.530921843" lastFinishedPulling="2026-02-02 11:48:30.353051441 +0000 UTC m=+3087.357300393" observedRunningTime="2026-02-02 11:48:30.744612871 +0000 UTC m=+3087.748861833" watchObservedRunningTime="2026-02-02 11:48:30.75194753 +0000 UTC m=+3087.756196492" Feb 02 11:48:30 crc kubenswrapper[4925]: I0202 11:48:30.805031 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-96c5cb844-xrpsd" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Feb 02 11:48:31 crc kubenswrapper[4925]: I0202 11:48:31.742738 4925 generic.go:334] "Generic (PLEG): container finished" podID="6b41c49d-63e3-462d-a541-092f967fa45e" containerID="5c31307d753cc16f69dd3ce1102c1c570ce11afcbc06220d80d0f4c652ea4469" exitCode=2 Feb 02 11:48:31 crc kubenswrapper[4925]: I0202 11:48:31.743114 4925 
generic.go:334] "Generic (PLEG): container finished" podID="6b41c49d-63e3-462d-a541-092f967fa45e" containerID="4be240fecdbecc856517261b7509c2c2ce831cfd31cb99d3aec0676437aec628" exitCode=0 Feb 02 11:48:31 crc kubenswrapper[4925]: I0202 11:48:31.742793 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerDied","Data":"5c31307d753cc16f69dd3ce1102c1c570ce11afcbc06220d80d0f4c652ea4469"} Feb 02 11:48:31 crc kubenswrapper[4925]: I0202 11:48:31.743154 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerDied","Data":"4be240fecdbecc856517261b7509c2c2ce831cfd31cb99d3aec0676437aec628"} Feb 02 11:48:32 crc kubenswrapper[4925]: I0202 11:48:32.753223 4925 generic.go:334] "Generic (PLEG): container finished" podID="6b41c49d-63e3-462d-a541-092f967fa45e" containerID="133c424f847c93efe5aa25ab0b9a5a9a4325808a37335b57e8c9e23891a8f4db" exitCode=0 Feb 02 11:48:32 crc kubenswrapper[4925]: I0202 11:48:32.753297 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerDied","Data":"133c424f847c93efe5aa25ab0b9a5a9a4325808a37335b57e8c9e23891a8f4db"} Feb 02 11:48:34 crc kubenswrapper[4925]: I0202 11:48:34.961560 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 11:48:36 crc kubenswrapper[4925]: I0202 11:48:36.617101 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 11:48:36 crc kubenswrapper[4925]: I0202 11:48:36.692828 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:48:36 crc kubenswrapper[4925]: I0202 11:48:36.788410 4925 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/manila-scheduler-0" podUID="0da3d601-adc6-40aa-9e21-697e239bdfa2" containerName="manila-scheduler" containerID="cri-o://3c4d28f035f0267f243a8cc53fd6f99219e41ab37c20f4020ec589b5c65bf4c9" gracePeriod=30 Feb 02 11:48:36 crc kubenswrapper[4925]: I0202 11:48:36.788496 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="0da3d601-adc6-40aa-9e21-697e239bdfa2" containerName="probe" containerID="cri-o://60bc72c7c9c96111e91e98f6c17d3a91f12ad15e1d2f06485961c6a4196de0dc" gracePeriod=30 Feb 02 11:48:37 crc kubenswrapper[4925]: I0202 11:48:37.798642 4925 generic.go:334] "Generic (PLEG): container finished" podID="0da3d601-adc6-40aa-9e21-697e239bdfa2" containerID="60bc72c7c9c96111e91e98f6c17d3a91f12ad15e1d2f06485961c6a4196de0dc" exitCode=0 Feb 02 11:48:37 crc kubenswrapper[4925]: I0202 11:48:37.798697 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"0da3d601-adc6-40aa-9e21-697e239bdfa2","Type":"ContainerDied","Data":"60bc72c7c9c96111e91e98f6c17d3a91f12ad15e1d2f06485961c6a4196de0dc"} Feb 02 11:48:40 crc kubenswrapper[4925]: I0202 11:48:40.804428 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-96c5cb844-xrpsd" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Feb 02 11:48:41 crc kubenswrapper[4925]: I0202 11:48:41.840766 4925 generic.go:334] "Generic (PLEG): container finished" podID="0da3d601-adc6-40aa-9e21-697e239bdfa2" containerID="3c4d28f035f0267f243a8cc53fd6f99219e41ab37c20f4020ec589b5c65bf4c9" exitCode=0 Feb 02 11:48:41 crc kubenswrapper[4925]: I0202 11:48:41.840939 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"0da3d601-adc6-40aa-9e21-697e239bdfa2","Type":"ContainerDied","Data":"3c4d28f035f0267f243a8cc53fd6f99219e41ab37c20f4020ec589b5c65bf4c9"} Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.028536 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.176639 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data\") pod \"0da3d601-adc6-40aa-9e21-697e239bdfa2\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.176756 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-scripts\") pod \"0da3d601-adc6-40aa-9e21-697e239bdfa2\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.176852 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk9v4\" (UniqueName: \"kubernetes.io/projected/0da3d601-adc6-40aa-9e21-697e239bdfa2-kube-api-access-sk9v4\") pod \"0da3d601-adc6-40aa-9e21-697e239bdfa2\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.176882 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da3d601-adc6-40aa-9e21-697e239bdfa2-etc-machine-id\") pod \"0da3d601-adc6-40aa-9e21-697e239bdfa2\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.176978 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data-custom\") 
pod \"0da3d601-adc6-40aa-9e21-697e239bdfa2\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.177052 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-combined-ca-bundle\") pod \"0da3d601-adc6-40aa-9e21-697e239bdfa2\" (UID: \"0da3d601-adc6-40aa-9e21-697e239bdfa2\") " Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.178366 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0da3d601-adc6-40aa-9e21-697e239bdfa2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0da3d601-adc6-40aa-9e21-697e239bdfa2" (UID: "0da3d601-adc6-40aa-9e21-697e239bdfa2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.185445 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0da3d601-adc6-40aa-9e21-697e239bdfa2" (UID: "0da3d601-adc6-40aa-9e21-697e239bdfa2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.186105 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da3d601-adc6-40aa-9e21-697e239bdfa2-kube-api-access-sk9v4" (OuterVolumeSpecName: "kube-api-access-sk9v4") pod "0da3d601-adc6-40aa-9e21-697e239bdfa2" (UID: "0da3d601-adc6-40aa-9e21-697e239bdfa2"). InnerVolumeSpecName "kube-api-access-sk9v4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.189638 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-scripts" (OuterVolumeSpecName: "scripts") pod "0da3d601-adc6-40aa-9e21-697e239bdfa2" (UID: "0da3d601-adc6-40aa-9e21-697e239bdfa2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.241835 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0da3d601-adc6-40aa-9e21-697e239bdfa2" (UID: "0da3d601-adc6-40aa-9e21-697e239bdfa2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.276874 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data" (OuterVolumeSpecName: "config-data") pod "0da3d601-adc6-40aa-9e21-697e239bdfa2" (UID: "0da3d601-adc6-40aa-9e21-697e239bdfa2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.280105 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk9v4\" (UniqueName: \"kubernetes.io/projected/0da3d601-adc6-40aa-9e21-697e239bdfa2-kube-api-access-sk9v4\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.280135 4925 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da3d601-adc6-40aa-9e21-697e239bdfa2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.280156 4925 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.280167 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.280178 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.280188 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da3d601-adc6-40aa-9e21-697e239bdfa2-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.303819 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.850968 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"0da3d601-adc6-40aa-9e21-697e239bdfa2","Type":"ContainerDied","Data":"55e088a5d6c3424ab8c02634916877cf4fb90c5c8e29351baafbc04748c2fdaf"} Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.851024 4925 scope.go:117] "RemoveContainer" containerID="60bc72c7c9c96111e91e98f6c17d3a91f12ad15e1d2f06485961c6a4196de0dc" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.851845 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.876131 4925 scope.go:117] "RemoveContainer" containerID="3c4d28f035f0267f243a8cc53fd6f99219e41ab37c20f4020ec589b5c65bf4c9" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.878635 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.896562 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.910181 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:48:42 crc kubenswrapper[4925]: E0202 11:48:42.910710 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da3d601-adc6-40aa-9e21-697e239bdfa2" containerName="probe" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.910733 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da3d601-adc6-40aa-9e21-697e239bdfa2" containerName="probe" Feb 02 11:48:42 crc kubenswrapper[4925]: E0202 11:48:42.910748 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerName="horizon-log" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.910757 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerName="horizon-log" Feb 02 11:48:42 crc kubenswrapper[4925]: E0202 11:48:42.910774 4925 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerName="horizon" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.910782 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerName="horizon" Feb 02 11:48:42 crc kubenswrapper[4925]: E0202 11:48:42.910793 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9244868-1883-4530-890c-4858c6733192" containerName="init" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.910801 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9244868-1883-4530-890c-4858c6733192" containerName="init" Feb 02 11:48:42 crc kubenswrapper[4925]: E0202 11:48:42.910813 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerName="horizon-log" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.910823 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerName="horizon-log" Feb 02 11:48:42 crc kubenswrapper[4925]: E0202 11:48:42.910847 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9244868-1883-4530-890c-4858c6733192" containerName="dnsmasq-dns" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.910855 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9244868-1883-4530-890c-4858c6733192" containerName="dnsmasq-dns" Feb 02 11:48:42 crc kubenswrapper[4925]: E0202 11:48:42.910883 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerName="horizon" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.910892 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerName="horizon" Feb 02 11:48:42 crc kubenswrapper[4925]: E0202 11:48:42.910912 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da3d601-adc6-40aa-9e21-697e239bdfa2" 
containerName="manila-scheduler" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.910920 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da3d601-adc6-40aa-9e21-697e239bdfa2" containerName="manila-scheduler" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.911162 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerName="horizon-log" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.911181 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerName="horizon-log" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.911197 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da3d601-adc6-40aa-9e21-697e239bdfa2" containerName="manila-scheduler" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.911221 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6bd22b-c4b9-407f-993c-4132ca172b06" containerName="horizon" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.911234 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="2165d1e8-2b5c-4b4d-b55f-d2280523c022" containerName="horizon" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.911249 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da3d601-adc6-40aa-9e21-697e239bdfa2" containerName="probe" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.911265 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9244868-1883-4530-890c-4858c6733192" containerName="dnsmasq-dns" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.912985 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.916792 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 02 11:48:42 crc kubenswrapper[4925]: I0202 11:48:42.924781 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.094158 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-config-data\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.094487 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-scripts\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.094527 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.094784 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w2dr\" (UniqueName: \"kubernetes.io/projected/4a002372-3866-4b2a-8f00-f5ae284f9e62-kube-api-access-8w2dr\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.094858 4925 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a002372-3866-4b2a-8f00-f5ae284f9e62-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.094912 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.196833 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w2dr\" (UniqueName: \"kubernetes.io/projected/4a002372-3866-4b2a-8f00-f5ae284f9e62-kube-api-access-8w2dr\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.197064 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a002372-3866-4b2a-8f00-f5ae284f9e62-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.197153 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.197276 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-config-data\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.197166 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a002372-3866-4b2a-8f00-f5ae284f9e62-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.197341 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-scripts\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.197487 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.202523 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-scripts\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.202746 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" 
Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.202997 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-config-data\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.205342 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a002372-3866-4b2a-8f00-f5ae284f9e62-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.213568 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w2dr\" (UniqueName: \"kubernetes.io/projected/4a002372-3866-4b2a-8f00-f5ae284f9e62-kube-api-access-8w2dr\") pod \"manila-scheduler-0\" (UID: \"4a002372-3866-4b2a-8f00-f5ae284f9e62\") " pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.240241 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:48:43 crc kubenswrapper[4925]: W0202 11:48:43.680835 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a002372_3866_4b2a_8f00_f5ae284f9e62.slice/crio-5f215ac37899ae81aefb3d0d06bf4029992e38a447186216b6aa1bc553430ba2 WatchSource:0}: Error finding container 5f215ac37899ae81aefb3d0d06bf4029992e38a447186216b6aa1bc553430ba2: Status 404 returned error can't find the container with id 5f215ac37899ae81aefb3d0d06bf4029992e38a447186216b6aa1bc553430ba2 Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.683872 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:48:43 crc kubenswrapper[4925]: I0202 11:48:43.862754 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4a002372-3866-4b2a-8f00-f5ae284f9e62","Type":"ContainerStarted","Data":"5f215ac37899ae81aefb3d0d06bf4029992e38a447186216b6aa1bc553430ba2"} Feb 02 11:48:44 crc kubenswrapper[4925]: I0202 11:48:44.673983 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:48:44 crc kubenswrapper[4925]: I0202 11:48:44.674735 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da3d601-adc6-40aa-9e21-697e239bdfa2" path="/var/lib/kubelet/pods/0da3d601-adc6-40aa-9e21-697e239bdfa2/volumes" Feb 02 11:48:44 crc kubenswrapper[4925]: E0202 11:48:44.674748 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:48:44 
crc kubenswrapper[4925]: I0202 11:48:44.878219 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4a002372-3866-4b2a-8f00-f5ae284f9e62","Type":"ContainerStarted","Data":"99ed4b63147149c8146f654d5195a86382d96afa10bcc08791293e78af08f2f6"} Feb 02 11:48:44 crc kubenswrapper[4925]: I0202 11:48:44.878271 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4a002372-3866-4b2a-8f00-f5ae284f9e62","Type":"ContainerStarted","Data":"2d1b1c5808e9f7999f399d49a4f41cc427853a0f333bf540eef545340aedb1fe"} Feb 02 11:48:44 crc kubenswrapper[4925]: I0202 11:48:44.896995 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.896975157 podStartE2EDuration="2.896975157s" podCreationTimestamp="2026-02-02 11:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:48:44.89378212 +0000 UTC m=+3101.898031092" watchObservedRunningTime="2026-02-02 11:48:44.896975157 +0000 UTC m=+3101.901224119" Feb 02 11:48:46 crc kubenswrapper[4925]: I0202 11:48:46.422859 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 11:48:46 crc kubenswrapper[4925]: I0202 11:48:46.480289 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:48:46 crc kubenswrapper[4925]: I0202 11:48:46.893682 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerName="manila-share" containerID="cri-o://c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48" gracePeriod=30 Feb 02 11:48:46 crc kubenswrapper[4925]: I0202 11:48:46.893757 4925 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/manila-share-share1-0" podUID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerName="probe" containerID="cri-o://c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d" gracePeriod=30 Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.845583 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.907895 4925 generic.go:334] "Generic (PLEG): container finished" podID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerID="c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d" exitCode=0 Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.907934 4925 generic.go:334] "Generic (PLEG): container finished" podID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerID="c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48" exitCode=1 Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.907960 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7","Type":"ContainerDied","Data":"c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d"} Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.907999 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7","Type":"ContainerDied","Data":"c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48"} Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.908013 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7","Type":"ContainerDied","Data":"3bdc2e2e6340f052d794a4222ac5126f7385691b635e0e51e34004332c24e3c0"} Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.908020 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.908032 4925 scope.go:117] "RemoveContainer" containerID="c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.938377 4925 scope.go:117] "RemoveContainer" containerID="c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.967001 4925 scope.go:117] "RemoveContainer" containerID="c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d" Feb 02 11:48:47 crc kubenswrapper[4925]: E0202 11:48:47.967452 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d\": container with ID starting with c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d not found: ID does not exist" containerID="c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.967497 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d"} err="failed to get container status \"c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d\": rpc error: code = NotFound desc = could not find container \"c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d\": container with ID starting with c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d not found: ID does not exist" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.967526 4925 scope.go:117] "RemoveContainer" containerID="c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48" Feb 02 11:48:47 crc kubenswrapper[4925]: E0202 11:48:47.968018 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48\": container with ID starting with c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48 not found: ID does not exist" containerID="c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.968051 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48"} err="failed to get container status \"c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48\": rpc error: code = NotFound desc = could not find container \"c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48\": container with ID starting with c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48 not found: ID does not exist" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.968067 4925 scope.go:117] "RemoveContainer" containerID="c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.968518 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d"} err="failed to get container status \"c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d\": rpc error: code = NotFound desc = could not find container \"c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d\": container with ID starting with c59cbabacb9f8dad9317672edca163f6f11ff400407b29bacbb0a8a803d0db6d not found: ID does not exist" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.968553 4925 scope.go:117] "RemoveContainer" containerID="c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.968863 4925 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48"} err="failed to get container status \"c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48\": rpc error: code = NotFound desc = could not find container \"c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48\": container with ID starting with c2f52bb5826601ce94fe44db73a952dd3a45560cc2bb4066b5fc9955c65add48 not found: ID does not exist" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.984326 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-ceph\") pod \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.984457 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data-custom\") pod \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.984516 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25dr7\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-kube-api-access-25dr7\") pod \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.984545 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data\") pod \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.984601 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-scripts\") pod \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.984672 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-var-lib-manila\") pod \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.984743 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-combined-ca-bundle\") pod \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.984761 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-etc-machine-id\") pod \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\" (UID: \"a2c2fb0b-4fd4-4a03-8c55-9b11163458d7\") " Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.985145 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" (UID: "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.985254 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" (UID: "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.991670 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" (UID: "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.992868 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-kube-api-access-25dr7" (OuterVolumeSpecName: "kube-api-access-25dr7") pod "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" (UID: "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7"). InnerVolumeSpecName "kube-api-access-25dr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.993208 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-scripts" (OuterVolumeSpecName: "scripts") pod "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" (UID: "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:47 crc kubenswrapper[4925]: I0202 11:48:47.993387 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-ceph" (OuterVolumeSpecName: "ceph") pod "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" (UID: "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.039875 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" (UID: "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.087005 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.087040 4925 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.087052 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.087061 4925 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:48 crc 
kubenswrapper[4925]: I0202 11:48:48.087119 4925 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.087617 4925 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.087638 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25dr7\" (UniqueName: \"kubernetes.io/projected/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-kube-api-access-25dr7\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.097594 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data" (OuterVolumeSpecName: "config-data") pod "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" (UID: "a2c2fb0b-4fd4-4a03-8c55-9b11163458d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.189548 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.245478 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.253794 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.277127 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:48:48 crc kubenswrapper[4925]: E0202 11:48:48.277517 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerName="probe" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.277534 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerName="probe" Feb 02 11:48:48 crc kubenswrapper[4925]: E0202 11:48:48.277550 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerName="manila-share" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.277557 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerName="manila-share" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.277728 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerName="manila-share" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.277747 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" containerName="probe" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 
11:48:48.278676 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.280540 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.296720 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.393010 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6f74fd7-2cf3-4fc7-9535-50503f677c96-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.393817 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.393900 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-scripts\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.394380 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6f74fd7-2cf3-4fc7-9535-50503f677c96-ceph\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " 
pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.394423 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.394492 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-config-data\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.394517 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/b6f74fd7-2cf3-4fc7-9535-50503f677c96-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.394537 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78sck\" (UniqueName: \"kubernetes.io/projected/b6f74fd7-2cf3-4fc7-9535-50503f677c96-kube-api-access-78sck\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.496731 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-scripts\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 
crc kubenswrapper[4925]: I0202 11:48:48.496806 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6f74fd7-2cf3-4fc7-9535-50503f677c96-ceph\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.496851 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.496898 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-config-data\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.496926 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/b6f74fd7-2cf3-4fc7-9535-50503f677c96-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.496959 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78sck\" (UniqueName: \"kubernetes.io/projected/b6f74fd7-2cf3-4fc7-9535-50503f677c96-kube-api-access-78sck\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.497015 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6f74fd7-2cf3-4fc7-9535-50503f677c96-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.497119 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.497813 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/b6f74fd7-2cf3-4fc7-9535-50503f677c96-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.497847 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6f74fd7-2cf3-4fc7-9535-50503f677c96-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.500784 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b6f74fd7-2cf3-4fc7-9535-50503f677c96-ceph\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.500840 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-scripts\") pod \"manila-share-share1-0\" (UID: 
\"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.501383 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.501803 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-config-data\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.502751 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6f74fd7-2cf3-4fc7-9535-50503f677c96-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.515862 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78sck\" (UniqueName: \"kubernetes.io/projected/b6f74fd7-2cf3-4fc7-9535-50503f677c96-kube-api-access-78sck\") pod \"manila-share-share1-0\" (UID: \"b6f74fd7-2cf3-4fc7-9535-50503f677c96\") " pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.600595 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:48:48 crc kubenswrapper[4925]: I0202 11:48:48.676902 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c2fb0b-4fd4-4a03-8c55-9b11163458d7" path="/var/lib/kubelet/pods/a2c2fb0b-4fd4-4a03-8c55-9b11163458d7/volumes" Feb 02 11:48:49 crc kubenswrapper[4925]: I0202 11:48:49.124017 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:48:49 crc kubenswrapper[4925]: I0202 11:48:49.926444 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"b6f74fd7-2cf3-4fc7-9535-50503f677c96","Type":"ContainerStarted","Data":"28984e8c783a0f755aac96f0a179030d9566574be12942ee9bbee0425d26357d"} Feb 02 11:48:49 crc kubenswrapper[4925]: I0202 11:48:49.927124 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"b6f74fd7-2cf3-4fc7-9535-50503f677c96","Type":"ContainerStarted","Data":"80c58f21ba01d2c62b48ffad1272992e8ca6f3973ae82bd935888ea5780ee9d1"} Feb 02 11:48:50 crc kubenswrapper[4925]: I0202 11:48:50.805234 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-96c5cb844-xrpsd" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused" Feb 02 11:48:50 crc kubenswrapper[4925]: I0202 11:48:50.805891 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:48:50 crc kubenswrapper[4925]: I0202 11:48:50.978428 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"b6f74fd7-2cf3-4fc7-9535-50503f677c96","Type":"ContainerStarted","Data":"f6abc2c142e850bbcc854ad150f4e0961534f6edf43f3cb31514e7f3e091f0b4"} Feb 02 11:48:51 crc kubenswrapper[4925]: 
I0202 11:48:51.005589 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.005571151 podStartE2EDuration="3.005571151s" podCreationTimestamp="2026-02-02 11:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:48:51.002703193 +0000 UTC m=+3108.006952145" watchObservedRunningTime="2026-02-02 11:48:51.005571151 +0000 UTC m=+3108.009820113" Feb 02 11:48:53 crc kubenswrapper[4925]: I0202 11:48:53.241287 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 11:48:55 crc kubenswrapper[4925]: I0202 11:48:55.074866 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:48:55 crc kubenswrapper[4925]: I0202 11:48:55.081977 4925 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.033604 4925 generic.go:334] "Generic (PLEG): container finished" podID="1315a531-ca20-494e-9273-dfa832b62744" containerID="4cf5134ea8bc9226a2250703fc585cd09993a5838777b5b7aaf3a00bf1bdcb24" exitCode=137 Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.033676 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96c5cb844-xrpsd" event={"ID":"1315a531-ca20-494e-9273-dfa832b62744","Type":"ContainerDied","Data":"4cf5134ea8bc9226a2250703fc585cd09993a5838777b5b7aaf3a00bf1bdcb24"} Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.033931 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96c5cb844-xrpsd" 
event={"ID":"1315a531-ca20-494e-9273-dfa832b62744","Type":"ContainerDied","Data":"a69f2658f791155a16b645b4f245ba9433b0ee4982bf9ab18d50f86e7e61c74f"} Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.033963 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a69f2658f791155a16b645b4f245ba9433b0ee4982bf9ab18d50f86e7e61c74f" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.084273 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.266550 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-config-data\") pod \"1315a531-ca20-494e-9273-dfa832b62744\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.267385 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-secret-key\") pod \"1315a531-ca20-494e-9273-dfa832b62744\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.267421 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-tls-certs\") pod \"1315a531-ca20-494e-9273-dfa832b62744\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.267454 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-scripts\") pod \"1315a531-ca20-494e-9273-dfa832b62744\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " Feb 02 11:48:57 crc kubenswrapper[4925]: 
I0202 11:48:57.267590 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-combined-ca-bundle\") pod \"1315a531-ca20-494e-9273-dfa832b62744\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.267645 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4qhp\" (UniqueName: \"kubernetes.io/projected/1315a531-ca20-494e-9273-dfa832b62744-kube-api-access-v4qhp\") pod \"1315a531-ca20-494e-9273-dfa832b62744\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.267729 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1315a531-ca20-494e-9273-dfa832b62744-logs\") pod \"1315a531-ca20-494e-9273-dfa832b62744\" (UID: \"1315a531-ca20-494e-9273-dfa832b62744\") " Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.268339 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1315a531-ca20-494e-9273-dfa832b62744-logs" (OuterVolumeSpecName: "logs") pod "1315a531-ca20-494e-9273-dfa832b62744" (UID: "1315a531-ca20-494e-9273-dfa832b62744"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.268966 4925 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1315a531-ca20-494e-9273-dfa832b62744-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.274971 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1315a531-ca20-494e-9273-dfa832b62744" (UID: "1315a531-ca20-494e-9273-dfa832b62744"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.275462 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1315a531-ca20-494e-9273-dfa832b62744-kube-api-access-v4qhp" (OuterVolumeSpecName: "kube-api-access-v4qhp") pod "1315a531-ca20-494e-9273-dfa832b62744" (UID: "1315a531-ca20-494e-9273-dfa832b62744"). InnerVolumeSpecName "kube-api-access-v4qhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.294979 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-scripts" (OuterVolumeSpecName: "scripts") pod "1315a531-ca20-494e-9273-dfa832b62744" (UID: "1315a531-ca20-494e-9273-dfa832b62744"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.295373 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-config-data" (OuterVolumeSpecName: "config-data") pod "1315a531-ca20-494e-9273-dfa832b62744" (UID: "1315a531-ca20-494e-9273-dfa832b62744"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.299978 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1315a531-ca20-494e-9273-dfa832b62744" (UID: "1315a531-ca20-494e-9273-dfa832b62744"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.327571 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1315a531-ca20-494e-9273-dfa832b62744" (UID: "1315a531-ca20-494e-9273-dfa832b62744"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.370575 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.370611 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4qhp\" (UniqueName: \"kubernetes.io/projected/1315a531-ca20-494e-9273-dfa832b62744-kube-api-access-v4qhp\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.370624 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.370632 4925 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.370641 4925 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1315a531-ca20-494e-9273-dfa832b62744-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:57 crc kubenswrapper[4925]: I0202 11:48:57.370649 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1315a531-ca20-494e-9273-dfa832b62744-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:58 crc kubenswrapper[4925]: I0202 11:48:58.042597 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-96c5cb844-xrpsd" Feb 02 11:48:58 crc kubenswrapper[4925]: I0202 11:48:58.075911 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-96c5cb844-xrpsd"] Feb 02 11:48:58 crc kubenswrapper[4925]: I0202 11:48:58.084538 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-96c5cb844-xrpsd"] Feb 02 11:48:58 crc kubenswrapper[4925]: I0202 11:48:58.601235 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 11:48:58 crc kubenswrapper[4925]: I0202 11:48:58.665164 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:48:58 crc kubenswrapper[4925]: E0202 11:48:58.665482 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 
11:48:58 crc kubenswrapper[4925]: I0202 11:48:58.674475 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1315a531-ca20-494e-9273-dfa832b62744" path="/var/lib/kubelet/pods/1315a531-ca20-494e-9273-dfa832b62744/volumes" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.081921 4925 generic.go:334] "Generic (PLEG): container finished" podID="6b41c49d-63e3-462d-a541-092f967fa45e" containerID="5c2ce5e7fae0bf9679d7b8532e243129d31ee47c65b0559e442acc25731e9a84" exitCode=137 Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.082003 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerDied","Data":"5c2ce5e7fae0bf9679d7b8532e243129d31ee47c65b0559e442acc25731e9a84"} Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.082433 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b41c49d-63e3-462d-a541-092f967fa45e","Type":"ContainerDied","Data":"06e7b2c37441965e0bcd7aa230da1387617c6ad81fb58d08b38b2badb0169e04"} Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.082451 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e7b2c37441965e0bcd7aa230da1387617c6ad81fb58d08b38b2badb0169e04" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.147263 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.244265 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-run-httpd\") pod \"6b41c49d-63e3-462d-a541-092f967fa45e\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.244409 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-log-httpd\") pod \"6b41c49d-63e3-462d-a541-092f967fa45e\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.244444 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-scripts\") pod \"6b41c49d-63e3-462d-a541-092f967fa45e\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.244503 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-ceilometer-tls-certs\") pod \"6b41c49d-63e3-462d-a541-092f967fa45e\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.244522 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-combined-ca-bundle\") pod \"6b41c49d-63e3-462d-a541-092f967fa45e\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.244553 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-config-data\") pod \"6b41c49d-63e3-462d-a541-092f967fa45e\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.244635 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-sg-core-conf-yaml\") pod \"6b41c49d-63e3-462d-a541-092f967fa45e\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.244741 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6r4z\" (UniqueName: \"kubernetes.io/projected/6b41c49d-63e3-462d-a541-092f967fa45e-kube-api-access-h6r4z\") pod \"6b41c49d-63e3-462d-a541-092f967fa45e\" (UID: \"6b41c49d-63e3-462d-a541-092f967fa45e\") " Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.244916 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b41c49d-63e3-462d-a541-092f967fa45e" (UID: "6b41c49d-63e3-462d-a541-092f967fa45e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.245114 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b41c49d-63e3-462d-a541-092f967fa45e" (UID: "6b41c49d-63e3-462d-a541-092f967fa45e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.245377 4925 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.245391 4925 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b41c49d-63e3-462d-a541-092f967fa45e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.250467 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-scripts" (OuterVolumeSpecName: "scripts") pod "6b41c49d-63e3-462d-a541-092f967fa45e" (UID: "6b41c49d-63e3-462d-a541-092f967fa45e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.251639 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b41c49d-63e3-462d-a541-092f967fa45e-kube-api-access-h6r4z" (OuterVolumeSpecName: "kube-api-access-h6r4z") pod "6b41c49d-63e3-462d-a541-092f967fa45e" (UID: "6b41c49d-63e3-462d-a541-092f967fa45e"). InnerVolumeSpecName "kube-api-access-h6r4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.273968 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b41c49d-63e3-462d-a541-092f967fa45e" (UID: "6b41c49d-63e3-462d-a541-092f967fa45e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.299643 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6b41c49d-63e3-462d-a541-092f967fa45e" (UID: "6b41c49d-63e3-462d-a541-092f967fa45e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.321728 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b41c49d-63e3-462d-a541-092f967fa45e" (UID: "6b41c49d-63e3-462d-a541-092f967fa45e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.343981 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-config-data" (OuterVolumeSpecName: "config-data") pod "6b41c49d-63e3-462d-a541-092f967fa45e" (UID: "6b41c49d-63e3-462d-a541-092f967fa45e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.347275 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6r4z\" (UniqueName: \"kubernetes.io/projected/6b41c49d-63e3-462d-a541-092f967fa45e-kube-api-access-h6r4z\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.347305 4925 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.347315 4925 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.347323 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.347332 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:01 crc kubenswrapper[4925]: I0202 11:49:01.347342 4925 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b41c49d-63e3-462d-a541-092f967fa45e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.089568 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.122775 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.132648 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.145314 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:49:02 crc kubenswrapper[4925]: E0202 11:49:02.145667 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.145684 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon" Feb 02 11:49:02 crc kubenswrapper[4925]: E0202 11:49:02.145697 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="ceilometer-notification-agent" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.145704 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="ceilometer-notification-agent" Feb 02 11:49:02 crc kubenswrapper[4925]: E0202 11:49:02.145714 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="sg-core" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.145721 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="sg-core" Feb 02 11:49:02 crc kubenswrapper[4925]: E0202 11:49:02.145731 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="proxy-httpd" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.145757 4925 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="proxy-httpd" Feb 02 11:49:02 crc kubenswrapper[4925]: E0202 11:49:02.145783 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="ceilometer-central-agent" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.145802 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="ceilometer-central-agent" Feb 02 11:49:02 crc kubenswrapper[4925]: E0202 11:49:02.145822 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon-log" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.145828 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon-log" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.145982 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="ceilometer-notification-agent" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.145995 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="ceilometer-central-agent" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.146007 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon-log" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.146021 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="sg-core" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.146032 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="1315a531-ca20-494e-9273-dfa832b62744" containerName="horizon" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.146044 4925 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" containerName="proxy-httpd" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.147580 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.151758 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.151921 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.152137 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.169031 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.264163 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-scripts\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.264353 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8803b0fe-f2e6-41bd-b2e8-b970178ff360-run-httpd\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.264608 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-config-data\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " 
pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.264652 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.264764 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.264819 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8803b0fe-f2e6-41bd-b2e8-b970178ff360-log-httpd\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.264875 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snk7w\" (UniqueName: \"kubernetes.io/projected/8803b0fe-f2e6-41bd-b2e8-b970178ff360-kube-api-access-snk7w\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.264940 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.367169 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-config-data\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.367227 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.367298 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.367323 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8803b0fe-f2e6-41bd-b2e8-b970178ff360-log-httpd\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.367357 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snk7w\" (UniqueName: \"kubernetes.io/projected/8803b0fe-f2e6-41bd-b2e8-b970178ff360-kube-api-access-snk7w\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.367396 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.367416 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-scripts\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.367453 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8803b0fe-f2e6-41bd-b2e8-b970178ff360-run-httpd\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.368226 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8803b0fe-f2e6-41bd-b2e8-b970178ff360-log-httpd\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.368505 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8803b0fe-f2e6-41bd-b2e8-b970178ff360-run-httpd\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.371758 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.371931 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.372234 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-scripts\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.373845 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-config-data\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.380939 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8803b0fe-f2e6-41bd-b2e8-b970178ff360-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.384234 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snk7w\" (UniqueName: \"kubernetes.io/projected/8803b0fe-f2e6-41bd-b2e8-b970178ff360-kube-api-access-snk7w\") pod \"ceilometer-0\" (UID: \"8803b0fe-f2e6-41bd-b2e8-b970178ff360\") " pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.467741 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.676976 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b41c49d-63e3-462d-a541-092f967fa45e" path="/var/lib/kubelet/pods/6b41c49d-63e3-462d-a541-092f967fa45e/volumes" Feb 02 11:49:02 crc kubenswrapper[4925]: I0202 11:49:02.951291 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:49:03 crc kubenswrapper[4925]: I0202 11:49:03.104472 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8803b0fe-f2e6-41bd-b2e8-b970178ff360","Type":"ContainerStarted","Data":"5aa1bdcda331becc344e538c6045e5d05d4e7c274c0f5e341265a6b338a2ff77"} Feb 02 11:49:04 crc kubenswrapper[4925]: I0202 11:49:04.125006 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8803b0fe-f2e6-41bd-b2e8-b970178ff360","Type":"ContainerStarted","Data":"ee5976a335df62396d68056a4fb948b3d474f29b6aadebd36fccae690f31fb9c"} Feb 02 11:49:04 crc kubenswrapper[4925]: I0202 11:49:04.817339 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 11:49:05 crc kubenswrapper[4925]: I0202 11:49:05.141068 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8803b0fe-f2e6-41bd-b2e8-b970178ff360","Type":"ContainerStarted","Data":"b3698adbcf771b4823b292d37b26c7123c247c5b5974580a59f12d6835a9df62"} Feb 02 11:49:06 crc kubenswrapper[4925]: I0202 11:49:06.153044 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8803b0fe-f2e6-41bd-b2e8-b970178ff360","Type":"ContainerStarted","Data":"7ab524a6c27e7c74bc73fb6d28f3e27742d5637f77b18d7fa2685249eeefae41"} Feb 02 11:49:08 crc kubenswrapper[4925]: I0202 11:49:08.185121 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8803b0fe-f2e6-41bd-b2e8-b970178ff360","Type":"ContainerStarted","Data":"00b88f993b477e70f8082f3690c3e43cd5eafb2a3a99661fd2be51c4509d812f"} Feb 02 11:49:08 crc kubenswrapper[4925]: I0202 11:49:08.185543 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:49:08 crc kubenswrapper[4925]: I0202 11:49:08.222025 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.121882795 podStartE2EDuration="6.222002687s" podCreationTimestamp="2026-02-02 11:49:02 +0000 UTC" firstStartedPulling="2026-02-02 11:49:02.952355117 +0000 UTC m=+3119.956604119" lastFinishedPulling="2026-02-02 11:49:07.052475049 +0000 UTC m=+3124.056724011" observedRunningTime="2026-02-02 11:49:08.212219862 +0000 UTC m=+3125.216468844" watchObservedRunningTime="2026-02-02 11:49:08.222002687 +0000 UTC m=+3125.226251649" Feb 02 11:49:10 crc kubenswrapper[4925]: I0202 11:49:10.133322 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 11:49:11 crc kubenswrapper[4925]: I0202 11:49:11.664786 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:49:11 crc kubenswrapper[4925]: E0202 11:49:11.665422 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:49:24 crc kubenswrapper[4925]: I0202 11:49:24.673143 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:49:24 crc kubenswrapper[4925]: E0202 
11:49:24.673826 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:49:32 crc kubenswrapper[4925]: I0202 11:49:32.476834 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 11:49:37 crc kubenswrapper[4925]: I0202 11:49:37.664178 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:49:37 crc kubenswrapper[4925]: E0202 11:49:37.664904 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:49:49 crc kubenswrapper[4925]: I0202 11:49:49.664555 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:49:49 crc kubenswrapper[4925]: E0202 11:49:49.665480 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:50:00 crc kubenswrapper[4925]: 
I0202 11:50:00.665555 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:50:00 crc kubenswrapper[4925]: E0202 11:50:00.666498 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:50:14 crc kubenswrapper[4925]: I0202 11:50:14.674884 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:50:14 crc kubenswrapper[4925]: E0202 11:50:14.676043 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.479190 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.482174 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.484268 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bjkfm" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.485220 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.485303 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.485474 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.504615 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.603311 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.603409 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.603484 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.603538 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll8f5\" (UniqueName: \"kubernetes.io/projected/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-kube-api-access-ll8f5\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.603841 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.603974 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-config-data\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.604067 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.604306 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.604406 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706386 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706430 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-config-data\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706455 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706504 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config-secret\") pod 
\"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706528 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706564 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706596 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706640 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706665 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll8f5\" (UniqueName: \"kubernetes.io/projected/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-kube-api-access-ll8f5\") pod \"tempest-tests-tempest\" (UID: 
\"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.706846 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.707192 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.707460 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.708344 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.709931 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-config-data\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " 
pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.712305 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.713907 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.715440 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.724169 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll8f5\" (UniqueName: \"kubernetes.io/projected/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-kube-api-access-ll8f5\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.733325 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " pod="openstack/tempest-tests-tempest" Feb 02 11:50:19 crc kubenswrapper[4925]: I0202 11:50:19.831840 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:50:20 crc kubenswrapper[4925]: I0202 11:50:20.305922 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 11:50:20 crc kubenswrapper[4925]: I0202 11:50:20.863635 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"7390b503-a9bf-41e3-9506-1f63b8ad6d7d","Type":"ContainerStarted","Data":"45f6da8325c0cbb87e6a61d492f881aac25bfbd40ec411b4085fcb4d7d131b09"} Feb 02 11:50:29 crc kubenswrapper[4925]: I0202 11:50:29.664633 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:50:29 crc kubenswrapper[4925]: E0202 11:50:29.665500 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:50:43 crc kubenswrapper[4925]: I0202 11:50:43.664053 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:50:43 crc kubenswrapper[4925]: E0202 11:50:43.665045 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:50:58 crc kubenswrapper[4925]: I0202 11:50:58.665610 4925 scope.go:117] "RemoveContainer" 
containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:50:58 crc kubenswrapper[4925]: E0202 11:50:58.666939 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:51:02 crc kubenswrapper[4925]: E0202 11:51:02.003205 4925 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 02 11:51:02 crc kubenswrapper[4925]: E0202 11:51:02.003997 4925 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPa
th:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ll8f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(7390b503-a9bf-41e3-9506-1f63b8ad6d7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:51:02 crc kubenswrapper[4925]: E0202 11:51:02.005360 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="7390b503-a9bf-41e3-9506-1f63b8ad6d7d" Feb 02 11:51:02 crc kubenswrapper[4925]: E0202 11:51:02.257040 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="7390b503-a9bf-41e3-9506-1f63b8ad6d7d" Feb 02 11:51:12 crc kubenswrapper[4925]: I0202 11:51:12.664118 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:51:12 crc kubenswrapper[4925]: E0202 11:51:12.665033 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:51:15 crc kubenswrapper[4925]: I0202 11:51:15.330628 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 11:51:17 crc kubenswrapper[4925]: I0202 11:51:17.375478 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"7390b503-a9bf-41e3-9506-1f63b8ad6d7d","Type":"ContainerStarted","Data":"41a7a6c19be0ec5ec82dcdcc881aeff0404c9a2ea076215a4793f01112923dc7"} Feb 02 11:51:17 crc kubenswrapper[4925]: I0202 11:51:17.406189 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.397685769 podStartE2EDuration="59.406169393s" podCreationTimestamp="2026-02-02 11:50:18 +0000 UTC" firstStartedPulling="2026-02-02 11:50:20.317292666 +0000 UTC m=+3197.321541628" lastFinishedPulling="2026-02-02 11:51:15.32577629 +0000 UTC m=+3252.330025252" observedRunningTime="2026-02-02 11:51:17.394862729 +0000 UTC m=+3254.399111721" watchObservedRunningTime="2026-02-02 11:51:17.406169393 +0000 UTC m=+3254.410418365" Feb 02 11:51:23 crc kubenswrapper[4925]: I0202 11:51:23.664716 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:51:23 crc kubenswrapper[4925]: E0202 11:51:23.665719 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:51:35 crc kubenswrapper[4925]: I0202 11:51:35.663891 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:51:35 crc kubenswrapper[4925]: E0202 11:51:35.664691 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:51:46 crc kubenswrapper[4925]: I0202 11:51:46.664464 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:51:46 crc kubenswrapper[4925]: E0202 11:51:46.665436 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:52:00 crc kubenswrapper[4925]: I0202 11:52:00.664596 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:52:00 crc kubenswrapper[4925]: E0202 11:52:00.666765 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:52:13 crc kubenswrapper[4925]: I0202 11:52:13.664027 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:52:13 crc kubenswrapper[4925]: E0202 11:52:13.664910 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:52:24 crc kubenswrapper[4925]: I0202 11:52:24.673677 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:52:24 crc kubenswrapper[4925]: E0202 11:52:24.678029 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:52:35 crc kubenswrapper[4925]: I0202 11:52:35.663921 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:52:35 crc kubenswrapper[4925]: E0202 11:52:35.664768 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:52:49 crc kubenswrapper[4925]: I0202 11:52:49.664435 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:52:50 crc kubenswrapper[4925]: I0202 11:52:50.247667 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"1a55d69e8ca5c8a97f431423e99f4bd876e42a2de45199e51dd90540b909634e"} Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.240054 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bn2xg"] Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.244440 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.249121 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bn2xg"] Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.332575 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-utilities\") pod \"redhat-marketplace-bn2xg\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.332650 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-catalog-content\") pod \"redhat-marketplace-bn2xg\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.332760 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78hq\" (UniqueName: \"kubernetes.io/projected/6431cd88-b452-496a-b203-ba8490f4e3b3-kube-api-access-h78hq\") pod \"redhat-marketplace-bn2xg\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.434007 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-catalog-content\") pod \"redhat-marketplace-bn2xg\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.434147 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78hq\" (UniqueName: \"kubernetes.io/projected/6431cd88-b452-496a-b203-ba8490f4e3b3-kube-api-access-h78hq\") pod \"redhat-marketplace-bn2xg\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.434254 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-utilities\") pod \"redhat-marketplace-bn2xg\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.434688 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-catalog-content\") pod \"redhat-marketplace-bn2xg\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.434739 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-utilities\") pod \"redhat-marketplace-bn2xg\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.454757 4925 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-h78hq\" (UniqueName: \"kubernetes.io/projected/6431cd88-b452-496a-b203-ba8490f4e3b3-kube-api-access-h78hq\") pod \"redhat-marketplace-bn2xg\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:42 crc kubenswrapper[4925]: I0202 11:53:42.571585 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:43 crc kubenswrapper[4925]: I0202 11:53:43.156659 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bn2xg"] Feb 02 11:53:43 crc kubenswrapper[4925]: W0202 11:53:43.158648 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6431cd88_b452_496a_b203_ba8490f4e3b3.slice/crio-20b8ee4a49e3401825bd77fba798a1398fd242832ba65438a37c95c4f4c6d72f WatchSource:0}: Error finding container 20b8ee4a49e3401825bd77fba798a1398fd242832ba65438a37c95c4f4c6d72f: Status 404 returned error can't find the container with id 20b8ee4a49e3401825bd77fba798a1398fd242832ba65438a37c95c4f4c6d72f Feb 02 11:53:43 crc kubenswrapper[4925]: I0202 11:53:43.711231 4925 generic.go:334] "Generic (PLEG): container finished" podID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerID="9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0" exitCode=0 Feb 02 11:53:43 crc kubenswrapper[4925]: I0202 11:53:43.711341 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bn2xg" event={"ID":"6431cd88-b452-496a-b203-ba8490f4e3b3","Type":"ContainerDied","Data":"9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0"} Feb 02 11:53:43 crc kubenswrapper[4925]: I0202 11:53:43.711531 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bn2xg" 
event={"ID":"6431cd88-b452-496a-b203-ba8490f4e3b3","Type":"ContainerStarted","Data":"20b8ee4a49e3401825bd77fba798a1398fd242832ba65438a37c95c4f4c6d72f"} Feb 02 11:53:43 crc kubenswrapper[4925]: I0202 11:53:43.714959 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:53:45 crc kubenswrapper[4925]: I0202 11:53:45.731240 4925 generic.go:334] "Generic (PLEG): container finished" podID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerID="14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10" exitCode=0 Feb 02 11:53:45 crc kubenswrapper[4925]: I0202 11:53:45.731332 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bn2xg" event={"ID":"6431cd88-b452-496a-b203-ba8490f4e3b3","Type":"ContainerDied","Data":"14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10"} Feb 02 11:53:46 crc kubenswrapper[4925]: I0202 11:53:46.745642 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bn2xg" event={"ID":"6431cd88-b452-496a-b203-ba8490f4e3b3","Type":"ContainerStarted","Data":"3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885"} Feb 02 11:53:46 crc kubenswrapper[4925]: I0202 11:53:46.773220 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bn2xg" podStartSLOduration=2.345774319 podStartE2EDuration="4.773201708s" podCreationTimestamp="2026-02-02 11:53:42 +0000 UTC" firstStartedPulling="2026-02-02 11:53:43.714769993 +0000 UTC m=+3400.719018955" lastFinishedPulling="2026-02-02 11:53:46.142197342 +0000 UTC m=+3403.146446344" observedRunningTime="2026-02-02 11:53:46.767547585 +0000 UTC m=+3403.771796547" watchObservedRunningTime="2026-02-02 11:53:46.773201708 +0000 UTC m=+3403.777450670" Feb 02 11:53:52 crc kubenswrapper[4925]: I0202 11:53:52.571907 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:52 crc kubenswrapper[4925]: I0202 11:53:52.572545 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:52 crc kubenswrapper[4925]: I0202 11:53:52.617424 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:52 crc kubenswrapper[4925]: I0202 11:53:52.847291 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:52 crc kubenswrapper[4925]: I0202 11:53:52.912222 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bn2xg"] Feb 02 11:53:54 crc kubenswrapper[4925]: I0202 11:53:54.815822 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bn2xg" podUID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerName="registry-server" containerID="cri-o://3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885" gracePeriod=2 Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.332593 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.486616 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h78hq\" (UniqueName: \"kubernetes.io/projected/6431cd88-b452-496a-b203-ba8490f4e3b3-kube-api-access-h78hq\") pod \"6431cd88-b452-496a-b203-ba8490f4e3b3\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.487175 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-utilities\") pod \"6431cd88-b452-496a-b203-ba8490f4e3b3\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.487233 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-catalog-content\") pod \"6431cd88-b452-496a-b203-ba8490f4e3b3\" (UID: \"6431cd88-b452-496a-b203-ba8490f4e3b3\") " Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.488943 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-utilities" (OuterVolumeSpecName: "utilities") pod "6431cd88-b452-496a-b203-ba8490f4e3b3" (UID: "6431cd88-b452-496a-b203-ba8490f4e3b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.494302 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6431cd88-b452-496a-b203-ba8490f4e3b3-kube-api-access-h78hq" (OuterVolumeSpecName: "kube-api-access-h78hq") pod "6431cd88-b452-496a-b203-ba8490f4e3b3" (UID: "6431cd88-b452-496a-b203-ba8490f4e3b3"). InnerVolumeSpecName "kube-api-access-h78hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.509800 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6431cd88-b452-496a-b203-ba8490f4e3b3" (UID: "6431cd88-b452-496a-b203-ba8490f4e3b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.589440 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h78hq\" (UniqueName: \"kubernetes.io/projected/6431cd88-b452-496a-b203-ba8490f4e3b3-kube-api-access-h78hq\") on node \"crc\" DevicePath \"\"" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.589492 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.589503 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6431cd88-b452-496a-b203-ba8490f4e3b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.825655 4925 generic.go:334] "Generic (PLEG): container finished" podID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerID="3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885" exitCode=0 Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.825724 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bn2xg" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.825723 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bn2xg" event={"ID":"6431cd88-b452-496a-b203-ba8490f4e3b3","Type":"ContainerDied","Data":"3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885"} Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.825878 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bn2xg" event={"ID":"6431cd88-b452-496a-b203-ba8490f4e3b3","Type":"ContainerDied","Data":"20b8ee4a49e3401825bd77fba798a1398fd242832ba65438a37c95c4f4c6d72f"} Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.825898 4925 scope.go:117] "RemoveContainer" containerID="3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.850432 4925 scope.go:117] "RemoveContainer" containerID="14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.862377 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bn2xg"] Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.871682 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bn2xg"] Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.871932 4925 scope.go:117] "RemoveContainer" containerID="9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.916101 4925 scope.go:117] "RemoveContainer" containerID="3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885" Feb 02 11:53:55 crc kubenswrapper[4925]: E0202 11:53:55.916754 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885\": container with ID starting with 3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885 not found: ID does not exist" containerID="3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.916810 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885"} err="failed to get container status \"3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885\": rpc error: code = NotFound desc = could not find container \"3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885\": container with ID starting with 3eb808aa78d3c2aecb99aa96c4c5b810fe6e7d6f14feea619df05ed29d860885 not found: ID does not exist" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.916844 4925 scope.go:117] "RemoveContainer" containerID="14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10" Feb 02 11:53:55 crc kubenswrapper[4925]: E0202 11:53:55.917318 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10\": container with ID starting with 14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10 not found: ID does not exist" containerID="14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.917367 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10"} err="failed to get container status \"14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10\": rpc error: code = NotFound desc = could not find container \"14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10\": container with ID 
starting with 14ad042708e954e21bb99dea8a6afc04f2e9033ee43e99ea387da2a819f19a10 not found: ID does not exist" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.917409 4925 scope.go:117] "RemoveContainer" containerID="9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0" Feb 02 11:53:55 crc kubenswrapper[4925]: E0202 11:53:55.917847 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0\": container with ID starting with 9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0 not found: ID does not exist" containerID="9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0" Feb 02 11:53:55 crc kubenswrapper[4925]: I0202 11:53:55.917874 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0"} err="failed to get container status \"9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0\": rpc error: code = NotFound desc = could not find container \"9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0\": container with ID starting with 9b758d8031e345fbc49146d2ddac912ef22ec273becdb4a1b9cbdf68a35b0de0 not found: ID does not exist" Feb 02 11:53:56 crc kubenswrapper[4925]: I0202 11:53:56.675914 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6431cd88-b452-496a-b203-ba8490f4e3b3" path="/var/lib/kubelet/pods/6431cd88-b452-496a-b203-ba8490f4e3b3/volumes" Feb 02 11:54:30 crc kubenswrapper[4925]: I0202 11:54:30.034896 4925 scope.go:117] "RemoveContainer" containerID="4cf5134ea8bc9226a2250703fc585cd09993a5838777b5b7aaf3a00bf1bdcb24" Feb 02 11:54:30 crc kubenswrapper[4925]: I0202 11:54:30.066302 4925 scope.go:117] "RemoveContainer" containerID="5c31307d753cc16f69dd3ce1102c1c570ce11afcbc06220d80d0f4c652ea4469" Feb 02 11:54:30 crc kubenswrapper[4925]: 
I0202 11:54:30.101137 4925 scope.go:117] "RemoveContainer" containerID="2a2caab84eef03021831e24f792ffa3c52397dade541be444489851fc252f99d" Feb 02 11:54:30 crc kubenswrapper[4925]: I0202 11:54:30.266253 4925 scope.go:117] "RemoveContainer" containerID="133c424f847c93efe5aa25ab0b9a5a9a4325808a37335b57e8c9e23891a8f4db" Feb 02 11:54:30 crc kubenswrapper[4925]: I0202 11:54:30.293529 4925 scope.go:117] "RemoveContainer" containerID="4be240fecdbecc856517261b7509c2c2ce831cfd31cb99d3aec0676437aec628" Feb 02 11:55:13 crc kubenswrapper[4925]: I0202 11:55:13.402241 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:55:13 crc kubenswrapper[4925]: I0202 11:55:13.402902 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:55:30 crc kubenswrapper[4925]: I0202 11:55:30.366275 4925 scope.go:117] "RemoveContainer" containerID="5c2ce5e7fae0bf9679d7b8532e243129d31ee47c65b0559e442acc25731e9a84" Feb 02 11:55:43 crc kubenswrapper[4925]: I0202 11:55:43.398244 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:55:43 crc kubenswrapper[4925]: I0202 11:55:43.399790 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:56:13 crc kubenswrapper[4925]: I0202 11:56:13.399008 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:56:13 crc kubenswrapper[4925]: I0202 11:56:13.399542 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:56:13 crc kubenswrapper[4925]: I0202 11:56:13.399590 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:56:13 crc kubenswrapper[4925]: I0202 11:56:13.400437 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a55d69e8ca5c8a97f431423e99f4bd876e42a2de45199e51dd90540b909634e"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:56:13 crc kubenswrapper[4925]: I0202 11:56:13.400493 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://1a55d69e8ca5c8a97f431423e99f4bd876e42a2de45199e51dd90540b909634e" gracePeriod=600 Feb 02 
11:56:14 crc kubenswrapper[4925]: I0202 11:56:14.086513 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="1a55d69e8ca5c8a97f431423e99f4bd876e42a2de45199e51dd90540b909634e" exitCode=0 Feb 02 11:56:14 crc kubenswrapper[4925]: I0202 11:56:14.086570 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"1a55d69e8ca5c8a97f431423e99f4bd876e42a2de45199e51dd90540b909634e"} Feb 02 11:56:14 crc kubenswrapper[4925]: I0202 11:56:14.087120 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262"} Feb 02 11:56:14 crc kubenswrapper[4925]: I0202 11:56:14.087142 4925 scope.go:117] "RemoveContainer" containerID="0ff918b71edd65376fe4579f585bf07a98b9189abd600e6fa4baede0de625a44" Feb 02 11:57:41 crc kubenswrapper[4925]: I0202 11:57:41.033797 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-jvc5j"] Feb 02 11:57:41 crc kubenswrapper[4925]: I0202 11:57:41.040505 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-jvc5j"] Feb 02 11:57:42 crc kubenswrapper[4925]: I0202 11:57:42.029494 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-7be6-account-create-update-zxh47"] Feb 02 11:57:42 crc kubenswrapper[4925]: I0202 11:57:42.039146 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-7be6-account-create-update-zxh47"] Feb 02 11:57:42 crc kubenswrapper[4925]: I0202 11:57:42.675609 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed10fe1-fca8-4488-97a4-15e007cba9a0" 
path="/var/lib/kubelet/pods/5ed10fe1-fca8-4488-97a4-15e007cba9a0/volumes" Feb 02 11:57:42 crc kubenswrapper[4925]: I0202 11:57:42.676384 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce8c3f23-4314-490d-9fa5-0754abe083a1" path="/var/lib/kubelet/pods/ce8c3f23-4314-490d-9fa5-0754abe083a1/volumes" Feb 02 11:58:13 crc kubenswrapper[4925]: I0202 11:58:13.398827 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:58:13 crc kubenswrapper[4925]: I0202 11:58:13.399554 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:58:14 crc kubenswrapper[4925]: I0202 11:58:14.037760 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-fnp7t"] Feb 02 11:58:14 crc kubenswrapper[4925]: I0202 11:58:14.046050 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-fnp7t"] Feb 02 11:58:14 crc kubenswrapper[4925]: I0202 11:58:14.675401 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b317877-90ba-4149-aa93-a74664be058d" path="/var/lib/kubelet/pods/2b317877-90ba-4149-aa93-a74664be058d/volumes" Feb 02 11:58:30 crc kubenswrapper[4925]: I0202 11:58:30.450605 4925 scope.go:117] "RemoveContainer" containerID="d433a9ed6e8e4ffc96912b0b19cd6d0d7ea944859130388cddd1214bd2c8154c" Feb 02 11:58:30 crc kubenswrapper[4925]: I0202 11:58:30.473658 4925 scope.go:117] "RemoveContainer" containerID="d4b39a28ad2b0d50c3909d23cec1f2fd2ec523de942218fe4ebd70a91efdd2ae" Feb 02 
11:58:30 crc kubenswrapper[4925]: I0202 11:58:30.537196 4925 scope.go:117] "RemoveContainer" containerID="c61791d06afe45bcdd3c165e44cf44f79edb7f503237a8048f374dd783b2e562" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.358015 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x2n78"] Feb 02 11:58:31 crc kubenswrapper[4925]: E0202 11:58:31.358771 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerName="extract-utilities" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.358795 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerName="extract-utilities" Feb 02 11:58:31 crc kubenswrapper[4925]: E0202 11:58:31.358825 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerName="extract-content" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.358834 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerName="extract-content" Feb 02 11:58:31 crc kubenswrapper[4925]: E0202 11:58:31.358856 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerName="registry-server" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.358863 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerName="registry-server" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.359041 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="6431cd88-b452-496a-b203-ba8490f4e3b3" containerName="registry-server" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.361559 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.371512 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2n78"] Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.453101 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4299\" (UniqueName: \"kubernetes.io/projected/9e414449-c1b9-4f0d-b698-cb6822c94e9b-kube-api-access-p4299\") pod \"community-operators-x2n78\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.453266 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-catalog-content\") pod \"community-operators-x2n78\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.453370 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-utilities\") pod \"community-operators-x2n78\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.555034 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-catalog-content\") pod \"community-operators-x2n78\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.555173 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-utilities\") pod \"community-operators-x2n78\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.555205 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4299\" (UniqueName: \"kubernetes.io/projected/9e414449-c1b9-4f0d-b698-cb6822c94e9b-kube-api-access-p4299\") pod \"community-operators-x2n78\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.555501 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-catalog-content\") pod \"community-operators-x2n78\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.555758 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-utilities\") pod \"community-operators-x2n78\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.577820 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4299\" (UniqueName: \"kubernetes.io/projected/9e414449-c1b9-4f0d-b698-cb6822c94e9b-kube-api-access-p4299\") pod \"community-operators-x2n78\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:31 crc kubenswrapper[4925]: I0202 11:58:31.690438 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:32 crc kubenswrapper[4925]: I0202 11:58:32.270260 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2n78"] Feb 02 11:58:33 crc kubenswrapper[4925]: I0202 11:58:33.265276 4925 generic.go:334] "Generic (PLEG): container finished" podID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerID="d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b" exitCode=0 Feb 02 11:58:33 crc kubenswrapper[4925]: I0202 11:58:33.265393 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2n78" event={"ID":"9e414449-c1b9-4f0d-b698-cb6822c94e9b","Type":"ContainerDied","Data":"d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b"} Feb 02 11:58:33 crc kubenswrapper[4925]: I0202 11:58:33.265894 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2n78" event={"ID":"9e414449-c1b9-4f0d-b698-cb6822c94e9b","Type":"ContainerStarted","Data":"1231f5a892eeeb830cdcda7b3c140d8fe55ef051beaea29a3a419deaa8b52465"} Feb 02 11:58:34 crc kubenswrapper[4925]: I0202 11:58:34.274482 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2n78" event={"ID":"9e414449-c1b9-4f0d-b698-cb6822c94e9b","Type":"ContainerStarted","Data":"f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6"} Feb 02 11:58:35 crc kubenswrapper[4925]: I0202 11:58:35.283782 4925 generic.go:334] "Generic (PLEG): container finished" podID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerID="f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6" exitCode=0 Feb 02 11:58:35 crc kubenswrapper[4925]: I0202 11:58:35.284166 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2n78" 
event={"ID":"9e414449-c1b9-4f0d-b698-cb6822c94e9b","Type":"ContainerDied","Data":"f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6"} Feb 02 11:58:36 crc kubenswrapper[4925]: I0202 11:58:36.293550 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2n78" event={"ID":"9e414449-c1b9-4f0d-b698-cb6822c94e9b","Type":"ContainerStarted","Data":"2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691"} Feb 02 11:58:36 crc kubenswrapper[4925]: I0202 11:58:36.311448 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x2n78" podStartSLOduration=2.876419029 podStartE2EDuration="5.311431784s" podCreationTimestamp="2026-02-02 11:58:31 +0000 UTC" firstStartedPulling="2026-02-02 11:58:33.267360186 +0000 UTC m=+3690.271609158" lastFinishedPulling="2026-02-02 11:58:35.702372951 +0000 UTC m=+3692.706621913" observedRunningTime="2026-02-02 11:58:36.307640641 +0000 UTC m=+3693.311889613" watchObservedRunningTime="2026-02-02 11:58:36.311431784 +0000 UTC m=+3693.315680736" Feb 02 11:58:41 crc kubenswrapper[4925]: I0202 11:58:41.690675 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:41 crc kubenswrapper[4925]: I0202 11:58:41.691327 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:41 crc kubenswrapper[4925]: I0202 11:58:41.733275 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:42 crc kubenswrapper[4925]: I0202 11:58:42.380767 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:42 crc kubenswrapper[4925]: I0202 11:58:42.428973 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-x2n78"] Feb 02 11:58:43 crc kubenswrapper[4925]: I0202 11:58:43.399254 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:58:43 crc kubenswrapper[4925]: I0202 11:58:43.399623 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:58:44 crc kubenswrapper[4925]: I0202 11:58:44.350404 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x2n78" podUID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerName="registry-server" containerID="cri-o://2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691" gracePeriod=2 Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.027381 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.117210 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-utilities\") pod \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.117268 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-catalog-content\") pod \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.117337 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4299\" (UniqueName: \"kubernetes.io/projected/9e414449-c1b9-4f0d-b698-cb6822c94e9b-kube-api-access-p4299\") pod \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\" (UID: \"9e414449-c1b9-4f0d-b698-cb6822c94e9b\") " Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.118550 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-utilities" (OuterVolumeSpecName: "utilities") pod "9e414449-c1b9-4f0d-b698-cb6822c94e9b" (UID: "9e414449-c1b9-4f0d-b698-cb6822c94e9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.125323 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e414449-c1b9-4f0d-b698-cb6822c94e9b-kube-api-access-p4299" (OuterVolumeSpecName: "kube-api-access-p4299") pod "9e414449-c1b9-4f0d-b698-cb6822c94e9b" (UID: "9e414449-c1b9-4f0d-b698-cb6822c94e9b"). InnerVolumeSpecName "kube-api-access-p4299". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.219126 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.219171 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4299\" (UniqueName: \"kubernetes.io/projected/9e414449-c1b9-4f0d-b698-cb6822c94e9b-kube-api-access-p4299\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.325516 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e414449-c1b9-4f0d-b698-cb6822c94e9b" (UID: "9e414449-c1b9-4f0d-b698-cb6822c94e9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.358988 4925 generic.go:334] "Generic (PLEG): container finished" podID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerID="2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691" exitCode=0 Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.359033 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2n78" event={"ID":"9e414449-c1b9-4f0d-b698-cb6822c94e9b","Type":"ContainerDied","Data":"2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691"} Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.359035 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x2n78" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.359060 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2n78" event={"ID":"9e414449-c1b9-4f0d-b698-cb6822c94e9b","Type":"ContainerDied","Data":"1231f5a892eeeb830cdcda7b3c140d8fe55ef051beaea29a3a419deaa8b52465"} Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.359095 4925 scope.go:117] "RemoveContainer" containerID="2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.391243 4925 scope.go:117] "RemoveContainer" containerID="f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.394861 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2n78"] Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.406982 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x2n78"] Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.420118 4925 scope.go:117] "RemoveContainer" containerID="d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.423616 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e414449-c1b9-4f0d-b698-cb6822c94e9b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.462200 4925 scope.go:117] "RemoveContainer" containerID="2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691" Feb 02 11:58:45 crc kubenswrapper[4925]: E0202 11:58:45.462577 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691\": 
container with ID starting with 2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691 not found: ID does not exist" containerID="2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.462608 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691"} err="failed to get container status \"2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691\": rpc error: code = NotFound desc = could not find container \"2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691\": container with ID starting with 2b495f4df53fa964d94a5645285a0fb2d4b376865f212f830891481498593691 not found: ID does not exist" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.462628 4925 scope.go:117] "RemoveContainer" containerID="f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6" Feb 02 11:58:45 crc kubenswrapper[4925]: E0202 11:58:45.462907 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6\": container with ID starting with f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6 not found: ID does not exist" containerID="f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.462959 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6"} err="failed to get container status \"f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6\": rpc error: code = NotFound desc = could not find container \"f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6\": container with ID starting with 
f5e335b7c4164fc1f53c789cc80bc2d6be09c4189bb1be6a80a88cfd80450fc6 not found: ID does not exist" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.462993 4925 scope.go:117] "RemoveContainer" containerID="d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b" Feb 02 11:58:45 crc kubenswrapper[4925]: E0202 11:58:45.463344 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b\": container with ID starting with d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b not found: ID does not exist" containerID="d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b" Feb 02 11:58:45 crc kubenswrapper[4925]: I0202 11:58:45.463369 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b"} err="failed to get container status \"d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b\": rpc error: code = NotFound desc = could not find container \"d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b\": container with ID starting with d2aa286245075cdd5fc0b187ce6bfa93338c454ddb596f4ab24026df3ce7e59b not found: ID does not exist" Feb 02 11:58:46 crc kubenswrapper[4925]: I0202 11:58:46.684159 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" path="/var/lib/kubelet/pods/9e414449-c1b9-4f0d-b698-cb6822c94e9b/volumes" Feb 02 11:59:13 crc kubenswrapper[4925]: I0202 11:59:13.398352 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:59:13 crc kubenswrapper[4925]: I0202 11:59:13.398786 
4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:59:13 crc kubenswrapper[4925]: I0202 11:59:13.398828 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 11:59:13 crc kubenswrapper[4925]: I0202 11:59:13.400586 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:59:13 crc kubenswrapper[4925]: I0202 11:59:13.400870 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" gracePeriod=600 Feb 02 11:59:13 crc kubenswrapper[4925]: E0202 11:59:13.541477 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:59:13 crc kubenswrapper[4925]: I0202 11:59:13.620739 4925 generic.go:334] "Generic (PLEG): container finished" 
podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" exitCode=0 Feb 02 11:59:13 crc kubenswrapper[4925]: I0202 11:59:13.620795 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262"} Feb 02 11:59:13 crc kubenswrapper[4925]: I0202 11:59:13.620847 4925 scope.go:117] "RemoveContainer" containerID="1a55d69e8ca5c8a97f431423e99f4bd876e42a2de45199e51dd90540b909634e" Feb 02 11:59:13 crc kubenswrapper[4925]: I0202 11:59:13.621610 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 11:59:13 crc kubenswrapper[4925]: E0202 11:59:13.621924 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.527637 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hnffn"] Feb 02 11:59:19 crc kubenswrapper[4925]: E0202 11:59:19.528995 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerName="registry-server" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.529012 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerName="registry-server" Feb 02 11:59:19 crc kubenswrapper[4925]: E0202 11:59:19.529045 4925 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerName="extract-utilities" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.529054 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerName="extract-utilities" Feb 02 11:59:19 crc kubenswrapper[4925]: E0202 11:59:19.529065 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerName="extract-content" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.529072 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerName="extract-content" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.529293 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e414449-c1b9-4f0d-b698-cb6822c94e9b" containerName="registry-server" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.531034 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.541723 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnffn"] Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.572723 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk6qf\" (UniqueName: \"kubernetes.io/projected/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-kube-api-access-tk6qf\") pod \"redhat-operators-hnffn\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.572803 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-utilities\") pod \"redhat-operators-hnffn\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.572942 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-catalog-content\") pod \"redhat-operators-hnffn\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.674987 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-utilities\") pod \"redhat-operators-hnffn\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.675064 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-catalog-content\") pod \"redhat-operators-hnffn\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.675530 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-catalog-content\") pod \"redhat-operators-hnffn\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.675762 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk6qf\" (UniqueName: \"kubernetes.io/projected/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-kube-api-access-tk6qf\") pod \"redhat-operators-hnffn\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.675815 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-utilities\") pod \"redhat-operators-hnffn\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.719384 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk6qf\" (UniqueName: \"kubernetes.io/projected/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-kube-api-access-tk6qf\") pod \"redhat-operators-hnffn\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:19 crc kubenswrapper[4925]: I0202 11:59:19.851254 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:20 crc kubenswrapper[4925]: I0202 11:59:20.731759 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnffn"] Feb 02 11:59:21 crc kubenswrapper[4925]: I0202 11:59:21.714656 4925 generic.go:334] "Generic (PLEG): container finished" podID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerID="c0be1d774d5e86a49a99a4f513e598fc114045b390ec65e3301eb91257a5a4d6" exitCode=0 Feb 02 11:59:21 crc kubenswrapper[4925]: I0202 11:59:21.717599 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnffn" event={"ID":"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458","Type":"ContainerDied","Data":"c0be1d774d5e86a49a99a4f513e598fc114045b390ec65e3301eb91257a5a4d6"} Feb 02 11:59:21 crc kubenswrapper[4925]: I0202 11:59:21.717636 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnffn" event={"ID":"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458","Type":"ContainerStarted","Data":"4a4dacf1a9ff47c50677c1f65f9a049f2abe8f7c25152efadcea8789b1a4aeb3"} Feb 02 11:59:21 crc kubenswrapper[4925]: I0202 11:59:21.720926 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:59:23 crc kubenswrapper[4925]: I0202 11:59:23.731012 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnffn" event={"ID":"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458","Type":"ContainerStarted","Data":"b5e5b77a8fbdc75168c1956663cef8ea1c942e755da95cb6c776049ca38f9118"} Feb 02 11:59:24 crc kubenswrapper[4925]: I0202 11:59:24.742125 4925 generic.go:334] "Generic (PLEG): container finished" podID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerID="b5e5b77a8fbdc75168c1956663cef8ea1c942e755da95cb6c776049ca38f9118" exitCode=0 Feb 02 11:59:24 crc kubenswrapper[4925]: I0202 11:59:24.742457 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hnffn" event={"ID":"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458","Type":"ContainerDied","Data":"b5e5b77a8fbdc75168c1956663cef8ea1c942e755da95cb6c776049ca38f9118"} Feb 02 11:59:25 crc kubenswrapper[4925]: I0202 11:59:25.751612 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnffn" event={"ID":"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458","Type":"ContainerStarted","Data":"d6bb76830e4e079a079b021e33403155ed6edbe7a857e4a751324efc13493197"} Feb 02 11:59:25 crc kubenswrapper[4925]: I0202 11:59:25.771619 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hnffn" podStartSLOduration=3.3370229399999998 podStartE2EDuration="6.771597574s" podCreationTimestamp="2026-02-02 11:59:19 +0000 UTC" firstStartedPulling="2026-02-02 11:59:21.720297899 +0000 UTC m=+3738.724546861" lastFinishedPulling="2026-02-02 11:59:25.154872533 +0000 UTC m=+3742.159121495" observedRunningTime="2026-02-02 11:59:25.76888674 +0000 UTC m=+3742.773136202" watchObservedRunningTime="2026-02-02 11:59:25.771597574 +0000 UTC m=+3742.775846536" Feb 02 11:59:26 crc kubenswrapper[4925]: I0202 11:59:26.664303 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 11:59:26 crc kubenswrapper[4925]: E0202 11:59:26.664794 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:59:29 crc kubenswrapper[4925]: I0202 11:59:29.852734 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:29 crc kubenswrapper[4925]: I0202 11:59:29.853340 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:30 crc kubenswrapper[4925]: I0202 11:59:30.925803 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnffn" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerName="registry-server" probeResult="failure" output=< Feb 02 11:59:30 crc kubenswrapper[4925]: timeout: failed to connect service ":50051" within 1s Feb 02 11:59:30 crc kubenswrapper[4925]: > Feb 02 11:59:40 crc kubenswrapper[4925]: I0202 11:59:40.586761 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:40 crc kubenswrapper[4925]: I0202 11:59:40.642842 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:40 crc kubenswrapper[4925]: I0202 11:59:40.664667 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 11:59:40 crc kubenswrapper[4925]: E0202 11:59:40.664941 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 11:59:40 crc kubenswrapper[4925]: I0202 11:59:40.825958 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hnffn"] Feb 02 11:59:41 crc kubenswrapper[4925]: I0202 11:59:41.876233 4925 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-hnffn" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerName="registry-server" containerID="cri-o://d6bb76830e4e079a079b021e33403155ed6edbe7a857e4a751324efc13493197" gracePeriod=2 Feb 02 11:59:42 crc kubenswrapper[4925]: I0202 11:59:42.887050 4925 generic.go:334] "Generic (PLEG): container finished" podID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerID="d6bb76830e4e079a079b021e33403155ed6edbe7a857e4a751324efc13493197" exitCode=0 Feb 02 11:59:42 crc kubenswrapper[4925]: I0202 11:59:42.887124 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnffn" event={"ID":"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458","Type":"ContainerDied","Data":"d6bb76830e4e079a079b021e33403155ed6edbe7a857e4a751324efc13493197"} Feb 02 11:59:42 crc kubenswrapper[4925]: I0202 11:59:42.887544 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnffn" event={"ID":"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458","Type":"ContainerDied","Data":"4a4dacf1a9ff47c50677c1f65f9a049f2abe8f7c25152efadcea8789b1a4aeb3"} Feb 02 11:59:42 crc kubenswrapper[4925]: I0202 11:59:42.887563 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a4dacf1a9ff47c50677c1f65f9a049f2abe8f7c25152efadcea8789b1a4aeb3" Feb 02 11:59:42 crc kubenswrapper[4925]: I0202 11:59:42.911278 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.027793 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-utilities\") pod \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.027858 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-catalog-content\") pod \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.027919 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk6qf\" (UniqueName: \"kubernetes.io/projected/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-kube-api-access-tk6qf\") pod \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\" (UID: \"ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458\") " Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.029030 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-utilities" (OuterVolumeSpecName: "utilities") pod "ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" (UID: "ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.033401 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-kube-api-access-tk6qf" (OuterVolumeSpecName: "kube-api-access-tk6qf") pod "ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" (UID: "ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458"). InnerVolumeSpecName "kube-api-access-tk6qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.131045 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.131308 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk6qf\" (UniqueName: \"kubernetes.io/projected/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-kube-api-access-tk6qf\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.156315 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" (UID: "ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.233269 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.895515 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnffn" Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.931172 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hnffn"] Feb 02 11:59:43 crc kubenswrapper[4925]: I0202 11:59:43.940640 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hnffn"] Feb 02 11:59:44 crc kubenswrapper[4925]: I0202 11:59:44.676092 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" path="/var/lib/kubelet/pods/ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458/volumes" Feb 02 11:59:51 crc kubenswrapper[4925]: I0202 11:59:51.665245 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 11:59:51 crc kubenswrapper[4925]: E0202 11:59:51.666615 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.191807 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx"] Feb 02 12:00:00 crc kubenswrapper[4925]: E0202 12:00:00.192980 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerName="extract-content" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.192997 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerName="extract-content" Feb 02 12:00:00 crc kubenswrapper[4925]: E0202 12:00:00.193013 4925 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerName="extract-utilities" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.193020 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerName="extract-utilities" Feb 02 12:00:00 crc kubenswrapper[4925]: E0202 12:00:00.193030 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.193037 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.193277 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc6f4e1-b0a1-418f-974c-1f7cbbb0e458" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.193972 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.202041 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx"] Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.202749 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.203048 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.291525 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnlv\" (UniqueName: \"kubernetes.io/projected/4419339b-0b98-4c2e-9019-8adee9fd1ae1-kube-api-access-hpnlv\") pod \"collect-profiles-29500560-tkkqx\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.291722 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4419339b-0b98-4c2e-9019-8adee9fd1ae1-config-volume\") pod \"collect-profiles-29500560-tkkqx\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.291817 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4419339b-0b98-4c2e-9019-8adee9fd1ae1-secret-volume\") pod \"collect-profiles-29500560-tkkqx\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.394136 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpnlv\" (UniqueName: \"kubernetes.io/projected/4419339b-0b98-4c2e-9019-8adee9fd1ae1-kube-api-access-hpnlv\") pod \"collect-profiles-29500560-tkkqx\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.394238 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4419339b-0b98-4c2e-9019-8adee9fd1ae1-config-volume\") pod \"collect-profiles-29500560-tkkqx\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.394380 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4419339b-0b98-4c2e-9019-8adee9fd1ae1-secret-volume\") pod \"collect-profiles-29500560-tkkqx\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.395524 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4419339b-0b98-4c2e-9019-8adee9fd1ae1-config-volume\") pod \"collect-profiles-29500560-tkkqx\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.403058 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4419339b-0b98-4c2e-9019-8adee9fd1ae1-secret-volume\") pod \"collect-profiles-29500560-tkkqx\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.412704 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpnlv\" (UniqueName: \"kubernetes.io/projected/4419339b-0b98-4c2e-9019-8adee9fd1ae1-kube-api-access-hpnlv\") pod \"collect-profiles-29500560-tkkqx\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:00 crc kubenswrapper[4925]: I0202 12:00:00.525466 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:01 crc kubenswrapper[4925]: I0202 12:00:01.016699 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx"] Feb 02 12:00:01 crc kubenswrapper[4925]: I0202 12:00:01.033572 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" event={"ID":"4419339b-0b98-4c2e-9019-8adee9fd1ae1","Type":"ContainerStarted","Data":"1c9b7c140de810c9f0316591d94488fac6bd6f5f65a362ded458e4e9dbe3d75f"} Feb 02 12:00:01 crc kubenswrapper[4925]: E0202 12:00:01.733279 4925 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4419339b_0b98_4c2e_9019_8adee9fd1ae1.slice/crio-f89518e907cf2a4034a515a7c3f27bb86a650235a3dc120143436216559f1e34.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4419339b_0b98_4c2e_9019_8adee9fd1ae1.slice/crio-conmon-f89518e907cf2a4034a515a7c3f27bb86a650235a3dc120143436216559f1e34.scope\": RecentStats: unable to find data in memory cache]" Feb 02 12:00:02 crc kubenswrapper[4925]: I0202 12:00:02.043709 4925 generic.go:334] "Generic (PLEG): container finished" podID="4419339b-0b98-4c2e-9019-8adee9fd1ae1" containerID="f89518e907cf2a4034a515a7c3f27bb86a650235a3dc120143436216559f1e34" exitCode=0 Feb 02 12:00:02 crc kubenswrapper[4925]: I0202 12:00:02.043756 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" event={"ID":"4419339b-0b98-4c2e-9019-8adee9fd1ae1","Type":"ContainerDied","Data":"f89518e907cf2a4034a515a7c3f27bb86a650235a3dc120143436216559f1e34"} Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.664007 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:00:03 crc kubenswrapper[4925]: E0202 12:00:03.664871 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.668003 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.761832 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4419339b-0b98-4c2e-9019-8adee9fd1ae1-secret-volume\") pod \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.778310 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4419339b-0b98-4c2e-9019-8adee9fd1ae1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4419339b-0b98-4c2e-9019-8adee9fd1ae1" (UID: "4419339b-0b98-4c2e-9019-8adee9fd1ae1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.864830 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpnlv\" (UniqueName: \"kubernetes.io/projected/4419339b-0b98-4c2e-9019-8adee9fd1ae1-kube-api-access-hpnlv\") pod \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.865387 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4419339b-0b98-4c2e-9019-8adee9fd1ae1-config-volume\") pod \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\" (UID: \"4419339b-0b98-4c2e-9019-8adee9fd1ae1\") " Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.866771 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4419339b-0b98-4c2e-9019-8adee9fd1ae1-config-volume" (OuterVolumeSpecName: "config-volume") pod "4419339b-0b98-4c2e-9019-8adee9fd1ae1" (UID: "4419339b-0b98-4c2e-9019-8adee9fd1ae1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.866889 4925 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4419339b-0b98-4c2e-9019-8adee9fd1ae1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.873263 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4419339b-0b98-4c2e-9019-8adee9fd1ae1-kube-api-access-hpnlv" (OuterVolumeSpecName: "kube-api-access-hpnlv") pod "4419339b-0b98-4c2e-9019-8adee9fd1ae1" (UID: "4419339b-0b98-4c2e-9019-8adee9fd1ae1"). InnerVolumeSpecName "kube-api-access-hpnlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.969347 4925 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4419339b-0b98-4c2e-9019-8adee9fd1ae1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:03 crc kubenswrapper[4925]: I0202 12:00:03.969410 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpnlv\" (UniqueName: \"kubernetes.io/projected/4419339b-0b98-4c2e-9019-8adee9fd1ae1-kube-api-access-hpnlv\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:04 crc kubenswrapper[4925]: I0202 12:00:04.060519 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" event={"ID":"4419339b-0b98-4c2e-9019-8adee9fd1ae1","Type":"ContainerDied","Data":"1c9b7c140de810c9f0316591d94488fac6bd6f5f65a362ded458e4e9dbe3d75f"} Feb 02 12:00:04 crc kubenswrapper[4925]: I0202 12:00:04.060588 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c9b7c140de810c9f0316591d94488fac6bd6f5f65a362ded458e4e9dbe3d75f" Feb 02 12:00:04 crc kubenswrapper[4925]: I0202 12:00:04.060608 4925 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-tkkqx" Feb 02 12:00:04 crc kubenswrapper[4925]: I0202 12:00:04.755745 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp"] Feb 02 12:00:04 crc kubenswrapper[4925]: I0202 12:00:04.769711 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-g7vlp"] Feb 02 12:00:06 crc kubenswrapper[4925]: I0202 12:00:06.680336 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8a4ad92-ad81-471a-9495-15b9398f8eb4" path="/var/lib/kubelet/pods/f8a4ad92-ad81-471a-9495-15b9398f8eb4/volumes" Feb 02 12:00:15 crc kubenswrapper[4925]: I0202 12:00:15.664345 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:00:15 crc kubenswrapper[4925]: E0202 12:00:15.665136 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:00:30 crc kubenswrapper[4925]: I0202 12:00:30.664765 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:00:30 crc kubenswrapper[4925]: E0202 12:00:30.665746 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:00:30 crc kubenswrapper[4925]: I0202 12:00:30.691069 4925 scope.go:117] "RemoveContainer" containerID="fa9c5f0c1d1526483a03d5dc5a9db99b8d216756d18f9b67b429d0223aab3dd2" Feb 02 12:00:45 crc kubenswrapper[4925]: I0202 12:00:45.665059 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:00:45 crc kubenswrapper[4925]: E0202 12:00:45.666062 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:00:57 crc kubenswrapper[4925]: I0202 12:00:57.664776 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:00:57 crc kubenswrapper[4925]: E0202 12:00:57.666240 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.150940 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500561-khb48"] Feb 02 12:01:00 crc kubenswrapper[4925]: E0202 12:01:00.152105 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4419339b-0b98-4c2e-9019-8adee9fd1ae1" 
containerName="collect-profiles" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.152120 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4419339b-0b98-4c2e-9019-8adee9fd1ae1" containerName="collect-profiles" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.152355 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4419339b-0b98-4c2e-9019-8adee9fd1ae1" containerName="collect-profiles" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.153174 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.164970 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500561-khb48"] Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.297822 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-combined-ca-bundle\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.297872 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-config-data\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.297902 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srdh7\" (UniqueName: \"kubernetes.io/projected/90190a5e-2227-47b8-83e8-4f3f26891a14-kube-api-access-srdh7\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " 
pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.298105 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-fernet-keys\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.400546 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-combined-ca-bundle\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.400598 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-config-data\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.400623 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srdh7\" (UniqueName: \"kubernetes.io/projected/90190a5e-2227-47b8-83e8-4f3f26891a14-kube-api-access-srdh7\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.400664 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-fernet-keys\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " 
pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.407144 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-fernet-keys\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.421010 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-combined-ca-bundle\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.422326 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srdh7\" (UniqueName: \"kubernetes.io/projected/90190a5e-2227-47b8-83e8-4f3f26891a14-kube-api-access-srdh7\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.425400 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-config-data\") pod \"keystone-cron-29500561-khb48\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.470534 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:00 crc kubenswrapper[4925]: I0202 12:01:00.966042 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500561-khb48"] Feb 02 12:01:01 crc kubenswrapper[4925]: I0202 12:01:01.555135 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-khb48" event={"ID":"90190a5e-2227-47b8-83e8-4f3f26891a14","Type":"ContainerStarted","Data":"b292e5debae428b4de9f5140b213ca8e22f89d2f7bcbba0adc835e2d484c9f95"} Feb 02 12:01:01 crc kubenswrapper[4925]: I0202 12:01:01.555183 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-khb48" event={"ID":"90190a5e-2227-47b8-83e8-4f3f26891a14","Type":"ContainerStarted","Data":"a4eaf44b621f91fa16c2dcaf83b6c93816c02c7db837fb00adf86a8258e84953"} Feb 02 12:01:01 crc kubenswrapper[4925]: I0202 12:01:01.575047 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500561-khb48" podStartSLOduration=1.57502231 podStartE2EDuration="1.57502231s" podCreationTimestamp="2026-02-02 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 12:01:01.569761627 +0000 UTC m=+3838.574010589" watchObservedRunningTime="2026-02-02 12:01:01.57502231 +0000 UTC m=+3838.579271272" Feb 02 12:01:04 crc kubenswrapper[4925]: I0202 12:01:04.579324 4925 generic.go:334] "Generic (PLEG): container finished" podID="90190a5e-2227-47b8-83e8-4f3f26891a14" containerID="b292e5debae428b4de9f5140b213ca8e22f89d2f7bcbba0adc835e2d484c9f95" exitCode=0 Feb 02 12:01:04 crc kubenswrapper[4925]: I0202 12:01:04.579417 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-khb48" event={"ID":"90190a5e-2227-47b8-83e8-4f3f26891a14","Type":"ContainerDied","Data":"b292e5debae428b4de9f5140b213ca8e22f89d2f7bcbba0adc835e2d484c9f95"} 
Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.021225 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.115492 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srdh7\" (UniqueName: \"kubernetes.io/projected/90190a5e-2227-47b8-83e8-4f3f26891a14-kube-api-access-srdh7\") pod \"90190a5e-2227-47b8-83e8-4f3f26891a14\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.115641 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-config-data\") pod \"90190a5e-2227-47b8-83e8-4f3f26891a14\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.115668 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-combined-ca-bundle\") pod \"90190a5e-2227-47b8-83e8-4f3f26891a14\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.115817 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-fernet-keys\") pod \"90190a5e-2227-47b8-83e8-4f3f26891a14\" (UID: \"90190a5e-2227-47b8-83e8-4f3f26891a14\") " Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.121515 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "90190a5e-2227-47b8-83e8-4f3f26891a14" (UID: "90190a5e-2227-47b8-83e8-4f3f26891a14"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.121674 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90190a5e-2227-47b8-83e8-4f3f26891a14-kube-api-access-srdh7" (OuterVolumeSpecName: "kube-api-access-srdh7") pod "90190a5e-2227-47b8-83e8-4f3f26891a14" (UID: "90190a5e-2227-47b8-83e8-4f3f26891a14"). InnerVolumeSpecName "kube-api-access-srdh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.145121 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90190a5e-2227-47b8-83e8-4f3f26891a14" (UID: "90190a5e-2227-47b8-83e8-4f3f26891a14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.186519 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-config-data" (OuterVolumeSpecName: "config-data") pod "90190a5e-2227-47b8-83e8-4f3f26891a14" (UID: "90190a5e-2227-47b8-83e8-4f3f26891a14"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.218268 4925 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.218693 4925 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.218713 4925 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90190a5e-2227-47b8-83e8-4f3f26891a14-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.218727 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srdh7\" (UniqueName: \"kubernetes.io/projected/90190a5e-2227-47b8-83e8-4f3f26891a14-kube-api-access-srdh7\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.603315 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-khb48" event={"ID":"90190a5e-2227-47b8-83e8-4f3f26891a14","Type":"ContainerDied","Data":"a4eaf44b621f91fa16c2dcaf83b6c93816c02c7db837fb00adf86a8258e84953"} Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.603356 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4eaf44b621f91fa16c2dcaf83b6c93816c02c7db837fb00adf86a8258e84953" Feb 02 12:01:06 crc kubenswrapper[4925]: I0202 12:01:06.603417 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500561-khb48" Feb 02 12:01:09 crc kubenswrapper[4925]: I0202 12:01:09.664778 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:01:09 crc kubenswrapper[4925]: E0202 12:01:09.665570 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:01:21 crc kubenswrapper[4925]: I0202 12:01:21.665176 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:01:21 crc kubenswrapper[4925]: E0202 12:01:21.666219 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:01:33 crc kubenswrapper[4925]: I0202 12:01:33.664734 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:01:33 crc kubenswrapper[4925]: E0202 12:01:33.665878 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:01:45 crc kubenswrapper[4925]: I0202 12:01:45.664573 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:01:45 crc kubenswrapper[4925]: E0202 12:01:45.665572 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:01:58 crc kubenswrapper[4925]: I0202 12:01:58.664676 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:01:58 crc kubenswrapper[4925]: E0202 12:01:58.665526 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.691920 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsmmv"] Feb 02 12:02:09 crc kubenswrapper[4925]: E0202 12:02:09.692999 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90190a5e-2227-47b8-83e8-4f3f26891a14" containerName="keystone-cron" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.693016 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="90190a5e-2227-47b8-83e8-4f3f26891a14" 
containerName="keystone-cron" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.693346 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="90190a5e-2227-47b8-83e8-4f3f26891a14" containerName="keystone-cron" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.694720 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.707122 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsmmv"] Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.844885 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c29z2\" (UniqueName: \"kubernetes.io/projected/4f785e96-cf71-4bd5-bd96-bd589e582244-kube-api-access-c29z2\") pod \"certified-operators-fsmmv\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.845245 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-catalog-content\") pod \"certified-operators-fsmmv\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.845494 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-utilities\") pod \"certified-operators-fsmmv\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.947915 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-utilities\") pod \"certified-operators-fsmmv\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.948044 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c29z2\" (UniqueName: \"kubernetes.io/projected/4f785e96-cf71-4bd5-bd96-bd589e582244-kube-api-access-c29z2\") pod \"certified-operators-fsmmv\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.948114 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-catalog-content\") pod \"certified-operators-fsmmv\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.948497 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-utilities\") pod \"certified-operators-fsmmv\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.948575 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-catalog-content\") pod \"certified-operators-fsmmv\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:09 crc kubenswrapper[4925]: I0202 12:02:09.979946 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c29z2\" (UniqueName: 
\"kubernetes.io/projected/4f785e96-cf71-4bd5-bd96-bd589e582244-kube-api-access-c29z2\") pod \"certified-operators-fsmmv\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:10 crc kubenswrapper[4925]: I0202 12:02:10.015703 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:10 crc kubenswrapper[4925]: I0202 12:02:10.608721 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsmmv"] Feb 02 12:02:11 crc kubenswrapper[4925]: I0202 12:02:11.131186 4925 generic.go:334] "Generic (PLEG): container finished" podID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerID="2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38" exitCode=0 Feb 02 12:02:11 crc kubenswrapper[4925]: I0202 12:02:11.131249 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsmmv" event={"ID":"4f785e96-cf71-4bd5-bd96-bd589e582244","Type":"ContainerDied","Data":"2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38"} Feb 02 12:02:11 crc kubenswrapper[4925]: I0202 12:02:11.131445 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsmmv" event={"ID":"4f785e96-cf71-4bd5-bd96-bd589e582244","Type":"ContainerStarted","Data":"5849db6604baa02666f4538ec935be1cd3b9e830510045ceb76f5e3207a00525"} Feb 02 12:02:11 crc kubenswrapper[4925]: I0202 12:02:11.666250 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:02:11 crc kubenswrapper[4925]: E0202 12:02:11.666609 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:02:13 crc kubenswrapper[4925]: I0202 12:02:13.178287 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsmmv" event={"ID":"4f785e96-cf71-4bd5-bd96-bd589e582244","Type":"ContainerStarted","Data":"d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364"} Feb 02 12:02:14 crc kubenswrapper[4925]: I0202 12:02:14.189429 4925 generic.go:334] "Generic (PLEG): container finished" podID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerID="d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364" exitCode=0 Feb 02 12:02:14 crc kubenswrapper[4925]: I0202 12:02:14.189497 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsmmv" event={"ID":"4f785e96-cf71-4bd5-bd96-bd589e582244","Type":"ContainerDied","Data":"d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364"} Feb 02 12:02:15 crc kubenswrapper[4925]: I0202 12:02:15.201003 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsmmv" event={"ID":"4f785e96-cf71-4bd5-bd96-bd589e582244","Type":"ContainerStarted","Data":"55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125"} Feb 02 12:02:15 crc kubenswrapper[4925]: I0202 12:02:15.218793 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsmmv" podStartSLOduration=2.71690905 podStartE2EDuration="6.218775301s" podCreationTimestamp="2026-02-02 12:02:09 +0000 UTC" firstStartedPulling="2026-02-02 12:02:11.132871057 +0000 UTC m=+3908.137120009" lastFinishedPulling="2026-02-02 12:02:14.634737298 +0000 UTC m=+3911.638986260" observedRunningTime="2026-02-02 12:02:15.216276053 +0000 UTC m=+3912.220525025" 
watchObservedRunningTime="2026-02-02 12:02:15.218775301 +0000 UTC m=+3912.223024283" Feb 02 12:02:20 crc kubenswrapper[4925]: I0202 12:02:20.016427 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:20 crc kubenswrapper[4925]: I0202 12:02:20.016970 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:20 crc kubenswrapper[4925]: I0202 12:02:20.067514 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:20 crc kubenswrapper[4925]: I0202 12:02:20.292793 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:20 crc kubenswrapper[4925]: I0202 12:02:20.345727 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsmmv"] Feb 02 12:02:22 crc kubenswrapper[4925]: I0202 12:02:22.256674 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsmmv" podUID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerName="registry-server" containerID="cri-o://55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125" gracePeriod=2 Feb 02 12:02:22 crc kubenswrapper[4925]: I0202 12:02:22.664382 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:02:22 crc kubenswrapper[4925]: E0202 12:02:22.665001 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:02:22 crc kubenswrapper[4925]: I0202 12:02:22.883936 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.004879 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-utilities\") pod \"4f785e96-cf71-4bd5-bd96-bd589e582244\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.005028 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c29z2\" (UniqueName: \"kubernetes.io/projected/4f785e96-cf71-4bd5-bd96-bd589e582244-kube-api-access-c29z2\") pod \"4f785e96-cf71-4bd5-bd96-bd589e582244\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.005197 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-catalog-content\") pod \"4f785e96-cf71-4bd5-bd96-bd589e582244\" (UID: \"4f785e96-cf71-4bd5-bd96-bd589e582244\") " Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.006180 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-utilities" (OuterVolumeSpecName: "utilities") pod "4f785e96-cf71-4bd5-bd96-bd589e582244" (UID: "4f785e96-cf71-4bd5-bd96-bd589e582244"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.020386 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f785e96-cf71-4bd5-bd96-bd589e582244-kube-api-access-c29z2" (OuterVolumeSpecName: "kube-api-access-c29z2") pod "4f785e96-cf71-4bd5-bd96-bd589e582244" (UID: "4f785e96-cf71-4bd5-bd96-bd589e582244"). InnerVolumeSpecName "kube-api-access-c29z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.058388 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f785e96-cf71-4bd5-bd96-bd589e582244" (UID: "4f785e96-cf71-4bd5-bd96-bd589e582244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.107004 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.107036 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f785e96-cf71-4bd5-bd96-bd589e582244-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.107045 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c29z2\" (UniqueName: \"kubernetes.io/projected/4f785e96-cf71-4bd5-bd96-bd589e582244-kube-api-access-c29z2\") on node \"crc\" DevicePath \"\"" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.276837 4925 generic.go:334] "Generic (PLEG): container finished" podID="4f785e96-cf71-4bd5-bd96-bd589e582244" 
containerID="55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125" exitCode=0 Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.276873 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsmmv" event={"ID":"4f785e96-cf71-4bd5-bd96-bd589e582244","Type":"ContainerDied","Data":"55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125"} Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.276940 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsmmv" event={"ID":"4f785e96-cf71-4bd5-bd96-bd589e582244","Type":"ContainerDied","Data":"5849db6604baa02666f4538ec935be1cd3b9e830510045ceb76f5e3207a00525"} Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.276960 4925 scope.go:117] "RemoveContainer" containerID="55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.276959 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsmmv" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.302346 4925 scope.go:117] "RemoveContainer" containerID="d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.308449 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsmmv"] Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.328457 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsmmv"] Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.331387 4925 scope.go:117] "RemoveContainer" containerID="2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.374201 4925 scope.go:117] "RemoveContainer" containerID="55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125" Feb 02 12:02:23 crc kubenswrapper[4925]: E0202 12:02:23.374989 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125\": container with ID starting with 55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125 not found: ID does not exist" containerID="55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.375030 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125"} err="failed to get container status \"55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125\": rpc error: code = NotFound desc = could not find container \"55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125\": container with ID starting with 55e5b425f62b3cfa45a7beb81827eba13b211ecc621d2c4b70f5f078759c4125 not 
found: ID does not exist" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.375053 4925 scope.go:117] "RemoveContainer" containerID="d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364" Feb 02 12:02:23 crc kubenswrapper[4925]: E0202 12:02:23.375474 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364\": container with ID starting with d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364 not found: ID does not exist" containerID="d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.375503 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364"} err="failed to get container status \"d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364\": rpc error: code = NotFound desc = could not find container \"d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364\": container with ID starting with d7ed2c11d1f224cd3fd8337fcea183c153ed3420a158ec05be9fe566e9d2c364 not found: ID does not exist" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.375526 4925 scope.go:117] "RemoveContainer" containerID="2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38" Feb 02 12:02:23 crc kubenswrapper[4925]: E0202 12:02:23.375785 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38\": container with ID starting with 2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38 not found: ID does not exist" containerID="2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38" Feb 02 12:02:23 crc kubenswrapper[4925]: I0202 12:02:23.375812 4925 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38"} err="failed to get container status \"2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38\": rpc error: code = NotFound desc = could not find container \"2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38\": container with ID starting with 2fae715c6db473f510df15f83910919c805b4eb9548e7b76103603a2ce937b38 not found: ID does not exist" Feb 02 12:02:24 crc kubenswrapper[4925]: I0202 12:02:24.674805 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f785e96-cf71-4bd5-bd96-bd589e582244" path="/var/lib/kubelet/pods/4f785e96-cf71-4bd5-bd96-bd589e582244/volumes" Feb 02 12:02:33 crc kubenswrapper[4925]: I0202 12:02:33.664893 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:02:33 crc kubenswrapper[4925]: E0202 12:02:33.665888 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:02:46 crc kubenswrapper[4925]: I0202 12:02:46.665008 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:02:46 crc kubenswrapper[4925]: E0202 12:02:46.665888 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:02:57 crc kubenswrapper[4925]: I0202 12:02:57.664589 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:02:57 crc kubenswrapper[4925]: E0202 12:02:57.665478 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:03:11 crc kubenswrapper[4925]: I0202 12:03:11.664971 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:03:11 crc kubenswrapper[4925]: E0202 12:03:11.665965 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:03:22 crc kubenswrapper[4925]: I0202 12:03:22.664354 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:03:22 crc kubenswrapper[4925]: E0202 12:03:22.665153 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:03:33 crc kubenswrapper[4925]: I0202 12:03:33.664607 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:03:33 crc kubenswrapper[4925]: E0202 12:03:33.665481 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:03:45 crc kubenswrapper[4925]: I0202 12:03:45.664699 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:03:45 crc kubenswrapper[4925]: E0202 12:03:45.665435 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:03:58 crc kubenswrapper[4925]: I0202 12:03:58.664524 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:03:58 crc kubenswrapper[4925]: E0202 12:03:58.665312 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:04:13 crc kubenswrapper[4925]: I0202 12:04:13.664877 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:04:14 crc kubenswrapper[4925]: I0202 12:04:14.230490 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"ca5eb2d565a340a933e43fe7e8773142e51abcef016e3b43a528750dcfc81145"} Feb 02 12:05:04 crc kubenswrapper[4925]: I0202 12:05:04.844892 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vw2qn"] Feb 02 12:05:04 crc kubenswrapper[4925]: E0202 12:05:04.846770 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerName="extract-content" Feb 02 12:05:04 crc kubenswrapper[4925]: I0202 12:05:04.846786 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerName="extract-content" Feb 02 12:05:04 crc kubenswrapper[4925]: E0202 12:05:04.846801 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerName="extract-utilities" Feb 02 12:05:04 crc kubenswrapper[4925]: I0202 12:05:04.846809 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerName="extract-utilities" Feb 02 12:05:04 crc kubenswrapper[4925]: E0202 12:05:04.846830 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerName="registry-server" Feb 02 12:05:04 crc 
kubenswrapper[4925]: I0202 12:05:04.846836 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerName="registry-server" Feb 02 12:05:04 crc kubenswrapper[4925]: I0202 12:05:04.848487 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f785e96-cf71-4bd5-bd96-bd589e582244" containerName="registry-server" Feb 02 12:05:04 crc kubenswrapper[4925]: I0202 12:05:04.850023 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:04 crc kubenswrapper[4925]: I0202 12:05:04.860294 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw2qn"] Feb 02 12:05:04 crc kubenswrapper[4925]: I0202 12:05:04.948720 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-catalog-content\") pod \"redhat-marketplace-vw2qn\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:04 crc kubenswrapper[4925]: I0202 12:05:04.948788 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-utilities\") pod \"redhat-marketplace-vw2qn\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:04 crc kubenswrapper[4925]: I0202 12:05:04.948898 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2dn\" (UniqueName: \"kubernetes.io/projected/7a909afa-1335-4fc5-aa24-ed260e207a64-kube-api-access-md2dn\") pod \"redhat-marketplace-vw2qn\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:05 
crc kubenswrapper[4925]: I0202 12:05:05.050811 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md2dn\" (UniqueName: \"kubernetes.io/projected/7a909afa-1335-4fc5-aa24-ed260e207a64-kube-api-access-md2dn\") pod \"redhat-marketplace-vw2qn\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:05 crc kubenswrapper[4925]: I0202 12:05:05.050931 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-catalog-content\") pod \"redhat-marketplace-vw2qn\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:05 crc kubenswrapper[4925]: I0202 12:05:05.050988 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-utilities\") pod \"redhat-marketplace-vw2qn\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:05 crc kubenswrapper[4925]: I0202 12:05:05.051533 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-utilities\") pod \"redhat-marketplace-vw2qn\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:05 crc kubenswrapper[4925]: I0202 12:05:05.051639 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-catalog-content\") pod \"redhat-marketplace-vw2qn\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:05 crc kubenswrapper[4925]: I0202 12:05:05.075230 4925 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md2dn\" (UniqueName: \"kubernetes.io/projected/7a909afa-1335-4fc5-aa24-ed260e207a64-kube-api-access-md2dn\") pod \"redhat-marketplace-vw2qn\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:05 crc kubenswrapper[4925]: I0202 12:05:05.176221 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:05 crc kubenswrapper[4925]: I0202 12:05:05.713787 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw2qn"] Feb 02 12:05:06 crc kubenswrapper[4925]: I0202 12:05:06.676368 4925 generic.go:334] "Generic (PLEG): container finished" podID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerID="b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a" exitCode=0 Feb 02 12:05:06 crc kubenswrapper[4925]: I0202 12:05:06.679608 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 12:05:06 crc kubenswrapper[4925]: I0202 12:05:06.679844 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2qn" event={"ID":"7a909afa-1335-4fc5-aa24-ed260e207a64","Type":"ContainerDied","Data":"b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a"} Feb 02 12:05:06 crc kubenswrapper[4925]: I0202 12:05:06.679892 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2qn" event={"ID":"7a909afa-1335-4fc5-aa24-ed260e207a64","Type":"ContainerStarted","Data":"25bb0afbabac14baafa77dab051c601a44000ba3309f99db9514f947db3ff7b6"} Feb 02 12:05:08 crc kubenswrapper[4925]: I0202 12:05:08.698584 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2qn" 
event={"ID":"7a909afa-1335-4fc5-aa24-ed260e207a64","Type":"ContainerStarted","Data":"42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3"} Feb 02 12:05:09 crc kubenswrapper[4925]: I0202 12:05:09.707961 4925 generic.go:334] "Generic (PLEG): container finished" podID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerID="42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3" exitCode=0 Feb 02 12:05:09 crc kubenswrapper[4925]: I0202 12:05:09.708021 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2qn" event={"ID":"7a909afa-1335-4fc5-aa24-ed260e207a64","Type":"ContainerDied","Data":"42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3"} Feb 02 12:05:10 crc kubenswrapper[4925]: I0202 12:05:10.718464 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2qn" event={"ID":"7a909afa-1335-4fc5-aa24-ed260e207a64","Type":"ContainerStarted","Data":"def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421"} Feb 02 12:05:10 crc kubenswrapper[4925]: I0202 12:05:10.746504 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vw2qn" podStartSLOduration=3.305758881 podStartE2EDuration="6.746483879s" podCreationTimestamp="2026-02-02 12:05:04 +0000 UTC" firstStartedPulling="2026-02-02 12:05:06.679302204 +0000 UTC m=+4083.683551176" lastFinishedPulling="2026-02-02 12:05:10.120027212 +0000 UTC m=+4087.124276174" observedRunningTime="2026-02-02 12:05:10.734552236 +0000 UTC m=+4087.738801208" watchObservedRunningTime="2026-02-02 12:05:10.746483879 +0000 UTC m=+4087.750732841" Feb 02 12:05:15 crc kubenswrapper[4925]: I0202 12:05:15.177140 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:15 crc kubenswrapper[4925]: I0202 12:05:15.177621 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:15 crc kubenswrapper[4925]: I0202 12:05:15.248782 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:16 crc kubenswrapper[4925]: I0202 12:05:16.527129 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:16 crc kubenswrapper[4925]: I0202 12:05:16.579341 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw2qn"] Feb 02 12:05:17 crc kubenswrapper[4925]: I0202 12:05:17.769885 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vw2qn" podUID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerName="registry-server" containerID="cri-o://def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421" gracePeriod=2 Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.423458 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.522450 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-catalog-content\") pod \"7a909afa-1335-4fc5-aa24-ed260e207a64\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.522655 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md2dn\" (UniqueName: \"kubernetes.io/projected/7a909afa-1335-4fc5-aa24-ed260e207a64-kube-api-access-md2dn\") pod \"7a909afa-1335-4fc5-aa24-ed260e207a64\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.522756 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-utilities\") pod \"7a909afa-1335-4fc5-aa24-ed260e207a64\" (UID: \"7a909afa-1335-4fc5-aa24-ed260e207a64\") " Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.524128 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-utilities" (OuterVolumeSpecName: "utilities") pod "7a909afa-1335-4fc5-aa24-ed260e207a64" (UID: "7a909afa-1335-4fc5-aa24-ed260e207a64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.530194 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a909afa-1335-4fc5-aa24-ed260e207a64-kube-api-access-md2dn" (OuterVolumeSpecName: "kube-api-access-md2dn") pod "7a909afa-1335-4fc5-aa24-ed260e207a64" (UID: "7a909afa-1335-4fc5-aa24-ed260e207a64"). InnerVolumeSpecName "kube-api-access-md2dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.540846 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a909afa-1335-4fc5-aa24-ed260e207a64" (UID: "7a909afa-1335-4fc5-aa24-ed260e207a64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.625175 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md2dn\" (UniqueName: \"kubernetes.io/projected/7a909afa-1335-4fc5-aa24-ed260e207a64-kube-api-access-md2dn\") on node \"crc\" DevicePath \"\"" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.625207 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.625217 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a909afa-1335-4fc5-aa24-ed260e207a64-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.780234 4925 generic.go:334] "Generic (PLEG): container finished" podID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerID="def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421" exitCode=0 Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.780299 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2qn" event={"ID":"7a909afa-1335-4fc5-aa24-ed260e207a64","Type":"ContainerDied","Data":"def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421"} Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.780330 4925 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-vw2qn" event={"ID":"7a909afa-1335-4fc5-aa24-ed260e207a64","Type":"ContainerDied","Data":"25bb0afbabac14baafa77dab051c601a44000ba3309f99db9514f947db3ff7b6"} Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.780349 4925 scope.go:117] "RemoveContainer" containerID="def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.780304 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw2qn" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.804052 4925 scope.go:117] "RemoveContainer" containerID="42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.808237 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw2qn"] Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.816663 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw2qn"] Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.825470 4925 scope.go:117] "RemoveContainer" containerID="b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.869832 4925 scope.go:117] "RemoveContainer" containerID="def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421" Feb 02 12:05:18 crc kubenswrapper[4925]: E0202 12:05:18.871171 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421\": container with ID starting with def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421 not found: ID does not exist" containerID="def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.871200 4925 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421"} err="failed to get container status \"def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421\": rpc error: code = NotFound desc = could not find container \"def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421\": container with ID starting with def42efe5ec5c440177ee47805bcb2208b757217b1bdb4b3200e78c91aac7421 not found: ID does not exist" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.871219 4925 scope.go:117] "RemoveContainer" containerID="42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3" Feb 02 12:05:18 crc kubenswrapper[4925]: E0202 12:05:18.871574 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3\": container with ID starting with 42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3 not found: ID does not exist" containerID="42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.871598 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3"} err="failed to get container status \"42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3\": rpc error: code = NotFound desc = could not find container \"42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3\": container with ID starting with 42aeba71be76398833ddecd75e67381178478bf45eabfb3b05904766e885abb3 not found: ID does not exist" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.871610 4925 scope.go:117] "RemoveContainer" containerID="b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a" Feb 02 12:05:18 crc kubenswrapper[4925]: E0202 
12:05:18.871926 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a\": container with ID starting with b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a not found: ID does not exist" containerID="b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a" Feb 02 12:05:18 crc kubenswrapper[4925]: I0202 12:05:18.871947 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a"} err="failed to get container status \"b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a\": rpc error: code = NotFound desc = could not find container \"b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a\": container with ID starting with b199f6a0917fd3f74bbb9b9e88c9a0f91b2970dd71f8aef803ee01fb3bc0c17a not found: ID does not exist" Feb 02 12:05:20 crc kubenswrapper[4925]: I0202 12:05:20.681038 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a909afa-1335-4fc5-aa24-ed260e207a64" path="/var/lib/kubelet/pods/7a909afa-1335-4fc5-aa24-ed260e207a64/volumes" Feb 02 12:05:30 crc kubenswrapper[4925]: I0202 12:05:30.837288 4925 scope.go:117] "RemoveContainer" containerID="d6bb76830e4e079a079b021e33403155ed6edbe7a857e4a751324efc13493197" Feb 02 12:05:30 crc kubenswrapper[4925]: I0202 12:05:30.860183 4925 scope.go:117] "RemoveContainer" containerID="c0be1d774d5e86a49a99a4f513e598fc114045b390ec65e3301eb91257a5a4d6" Feb 02 12:05:30 crc kubenswrapper[4925]: I0202 12:05:30.880185 4925 scope.go:117] "RemoveContainer" containerID="b5e5b77a8fbdc75168c1956663cef8ea1c942e755da95cb6c776049ca38f9118" Feb 02 12:06:13 crc kubenswrapper[4925]: I0202 12:06:13.398419 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:06:13 crc kubenswrapper[4925]: I0202 12:06:13.398960 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:06:43 crc kubenswrapper[4925]: I0202 12:06:43.404891 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:06:43 crc kubenswrapper[4925]: I0202 12:06:43.405541 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:07:13 crc kubenswrapper[4925]: I0202 12:07:13.398911 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:07:13 crc kubenswrapper[4925]: I0202 12:07:13.399506 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:07:13 crc kubenswrapper[4925]: I0202 12:07:13.399556 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 12:07:13 crc kubenswrapper[4925]: I0202 12:07:13.400318 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca5eb2d565a340a933e43fe7e8773142e51abcef016e3b43a528750dcfc81145"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 12:07:13 crc kubenswrapper[4925]: I0202 12:07:13.400370 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://ca5eb2d565a340a933e43fe7e8773142e51abcef016e3b43a528750dcfc81145" gracePeriod=600 Feb 02 12:07:13 crc kubenswrapper[4925]: I0202 12:07:13.766878 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="ca5eb2d565a340a933e43fe7e8773142e51abcef016e3b43a528750dcfc81145" exitCode=0 Feb 02 12:07:13 crc kubenswrapper[4925]: I0202 12:07:13.766950 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"ca5eb2d565a340a933e43fe7e8773142e51abcef016e3b43a528750dcfc81145"} Feb 02 12:07:13 crc kubenswrapper[4925]: I0202 12:07:13.767589 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282"} Feb 02 12:07:13 crc kubenswrapper[4925]: I0202 12:07:13.767627 4925 scope.go:117] "RemoveContainer" containerID="b4ff01a22e0456f3bf1fc4eb4641f3950e168605386013080a51cfad7c604262" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.237186 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7f5js"] Feb 02 12:08:34 crc kubenswrapper[4925]: E0202 12:08:34.238382 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerName="extract-content" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.238400 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerName="extract-content" Feb 02 12:08:34 crc kubenswrapper[4925]: E0202 12:08:34.238418 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerName="extract-utilities" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.238426 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerName="extract-utilities" Feb 02 12:08:34 crc kubenswrapper[4925]: E0202 12:08:34.238455 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerName="registry-server" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.238464 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerName="registry-server" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.238692 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a909afa-1335-4fc5-aa24-ed260e207a64" containerName="registry-server" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.240048 4925 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.255292 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7f5js"] Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.360679 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshsd\" (UniqueName: \"kubernetes.io/projected/0b026c65-597a-4893-8d09-f0caf67db472-kube-api-access-vshsd\") pod \"community-operators-7f5js\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.360748 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-utilities\") pod \"community-operators-7f5js\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.360781 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-catalog-content\") pod \"community-operators-7f5js\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.462891 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-utilities\") pod \"community-operators-7f5js\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.462976 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-catalog-content\") pod \"community-operators-7f5js\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.463237 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vshsd\" (UniqueName: \"kubernetes.io/projected/0b026c65-597a-4893-8d09-f0caf67db472-kube-api-access-vshsd\") pod \"community-operators-7f5js\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.464262 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-utilities\") pod \"community-operators-7f5js\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.464549 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-catalog-content\") pod \"community-operators-7f5js\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.485183 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vshsd\" (UniqueName: \"kubernetes.io/projected/0b026c65-597a-4893-8d09-f0caf67db472-kube-api-access-vshsd\") pod \"community-operators-7f5js\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:34 crc kubenswrapper[4925]: I0202 12:08:34.557781 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:35 crc kubenswrapper[4925]: I0202 12:08:35.084483 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7f5js"] Feb 02 12:08:35 crc kubenswrapper[4925]: I0202 12:08:35.490875 4925 generic.go:334] "Generic (PLEG): container finished" podID="0b026c65-597a-4893-8d09-f0caf67db472" containerID="8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546" exitCode=0 Feb 02 12:08:35 crc kubenswrapper[4925]: I0202 12:08:35.490944 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7f5js" event={"ID":"0b026c65-597a-4893-8d09-f0caf67db472","Type":"ContainerDied","Data":"8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546"} Feb 02 12:08:35 crc kubenswrapper[4925]: I0202 12:08:35.491276 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7f5js" event={"ID":"0b026c65-597a-4893-8d09-f0caf67db472","Type":"ContainerStarted","Data":"56135d5d5829fd6f47c137895faa5ea4f62cf1de4ec8b3d68c8977c07597f0ac"} Feb 02 12:08:37 crc kubenswrapper[4925]: I0202 12:08:37.507925 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7f5js" event={"ID":"0b026c65-597a-4893-8d09-f0caf67db472","Type":"ContainerStarted","Data":"c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34"} Feb 02 12:08:38 crc kubenswrapper[4925]: I0202 12:08:38.517007 4925 generic.go:334] "Generic (PLEG): container finished" podID="0b026c65-597a-4893-8d09-f0caf67db472" containerID="c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34" exitCode=0 Feb 02 12:08:38 crc kubenswrapper[4925]: I0202 12:08:38.517111 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7f5js" 
event={"ID":"0b026c65-597a-4893-8d09-f0caf67db472","Type":"ContainerDied","Data":"c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34"} Feb 02 12:08:39 crc kubenswrapper[4925]: I0202 12:08:39.537707 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7f5js" event={"ID":"0b026c65-597a-4893-8d09-f0caf67db472","Type":"ContainerStarted","Data":"e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f"} Feb 02 12:08:44 crc kubenswrapper[4925]: I0202 12:08:44.558421 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:44 crc kubenswrapper[4925]: I0202 12:08:44.558910 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:44 crc kubenswrapper[4925]: I0202 12:08:44.615462 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:44 crc kubenswrapper[4925]: I0202 12:08:44.637780 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7f5js" podStartSLOduration=7.227767233 podStartE2EDuration="10.637762437s" podCreationTimestamp="2026-02-02 12:08:34 +0000 UTC" firstStartedPulling="2026-02-02 12:08:35.493805942 +0000 UTC m=+4292.498054904" lastFinishedPulling="2026-02-02 12:08:38.903801146 +0000 UTC m=+4295.908050108" observedRunningTime="2026-02-02 12:08:39.56142777 +0000 UTC m=+4296.565676732" watchObservedRunningTime="2026-02-02 12:08:44.637762437 +0000 UTC m=+4301.642011399" Feb 02 12:08:44 crc kubenswrapper[4925]: I0202 12:08:44.675795 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:45 crc kubenswrapper[4925]: I0202 12:08:45.422654 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-7f5js"] Feb 02 12:08:46 crc kubenswrapper[4925]: I0202 12:08:46.595908 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7f5js" podUID="0b026c65-597a-4893-8d09-f0caf67db472" containerName="registry-server" containerID="cri-o://e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f" gracePeriod=2 Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.229520 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.319697 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-catalog-content\") pod \"0b026c65-597a-4893-8d09-f0caf67db472\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.319760 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vshsd\" (UniqueName: \"kubernetes.io/projected/0b026c65-597a-4893-8d09-f0caf67db472-kube-api-access-vshsd\") pod \"0b026c65-597a-4893-8d09-f0caf67db472\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.319846 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-utilities\") pod \"0b026c65-597a-4893-8d09-f0caf67db472\" (UID: \"0b026c65-597a-4893-8d09-f0caf67db472\") " Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.320990 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-utilities" (OuterVolumeSpecName: "utilities") pod "0b026c65-597a-4893-8d09-f0caf67db472" (UID: 
"0b026c65-597a-4893-8d09-f0caf67db472"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.340477 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b026c65-597a-4893-8d09-f0caf67db472-kube-api-access-vshsd" (OuterVolumeSpecName: "kube-api-access-vshsd") pod "0b026c65-597a-4893-8d09-f0caf67db472" (UID: "0b026c65-597a-4893-8d09-f0caf67db472"). InnerVolumeSpecName "kube-api-access-vshsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.392679 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b026c65-597a-4893-8d09-f0caf67db472" (UID: "0b026c65-597a-4893-8d09-f0caf67db472"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.423319 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.423364 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vshsd\" (UniqueName: \"kubernetes.io/projected/0b026c65-597a-4893-8d09-f0caf67db472-kube-api-access-vshsd\") on node \"crc\" DevicePath \"\"" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.423378 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b026c65-597a-4893-8d09-f0caf67db472-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.605703 4925 generic.go:334] "Generic (PLEG): container finished" 
podID="0b026c65-597a-4893-8d09-f0caf67db472" containerID="e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f" exitCode=0 Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.605745 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7f5js" event={"ID":"0b026c65-597a-4893-8d09-f0caf67db472","Type":"ContainerDied","Data":"e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f"} Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.605800 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7f5js" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.605828 4925 scope.go:117] "RemoveContainer" containerID="e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.605813 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7f5js" event={"ID":"0b026c65-597a-4893-8d09-f0caf67db472","Type":"ContainerDied","Data":"56135d5d5829fd6f47c137895faa5ea4f62cf1de4ec8b3d68c8977c07597f0ac"} Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.630781 4925 scope.go:117] "RemoveContainer" containerID="c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.638706 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7f5js"] Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.646354 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7f5js"] Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.664330 4925 scope.go:117] "RemoveContainer" containerID="8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.693131 4925 scope.go:117] "RemoveContainer" 
containerID="e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f" Feb 02 12:08:47 crc kubenswrapper[4925]: E0202 12:08:47.693657 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f\": container with ID starting with e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f not found: ID does not exist" containerID="e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.693709 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f"} err="failed to get container status \"e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f\": rpc error: code = NotFound desc = could not find container \"e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f\": container with ID starting with e792c2e76941ee307bc3148be642e5a32f7669bf0add7a380218bf345df4b75f not found: ID does not exist" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.693740 4925 scope.go:117] "RemoveContainer" containerID="c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34" Feb 02 12:08:47 crc kubenswrapper[4925]: E0202 12:08:47.694102 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34\": container with ID starting with c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34 not found: ID does not exist" containerID="c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.694213 4925 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34"} err="failed to get container status \"c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34\": rpc error: code = NotFound desc = could not find container \"c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34\": container with ID starting with c8bd885f39f2b019b4fbc2b4d26f6d2de96744ae34a138fe10bcfb07e9a41c34 not found: ID does not exist" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.694297 4925 scope.go:117] "RemoveContainer" containerID="8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546" Feb 02 12:08:47 crc kubenswrapper[4925]: E0202 12:08:47.694753 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546\": container with ID starting with 8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546 not found: ID does not exist" containerID="8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546" Feb 02 12:08:47 crc kubenswrapper[4925]: I0202 12:08:47.694843 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546"} err="failed to get container status \"8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546\": rpc error: code = NotFound desc = could not find container \"8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546\": container with ID starting with 8584718b32f8ae9bf1fc7049ac0fbad7ae48dc0a2d093ea9c4de18f4ecc80546 not found: ID does not exist" Feb 02 12:08:48 crc kubenswrapper[4925]: I0202 12:08:48.674760 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b026c65-597a-4893-8d09-f0caf67db472" path="/var/lib/kubelet/pods/0b026c65-597a-4893-8d09-f0caf67db472/volumes" Feb 02 12:09:13 crc kubenswrapper[4925]: I0202 
12:09:13.399315 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:09:13 crc kubenswrapper[4925]: I0202 12:09:13.399860 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:09:43 crc kubenswrapper[4925]: I0202 12:09:43.398995 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:09:43 crc kubenswrapper[4925]: I0202 12:09:43.399621 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:10:13 crc kubenswrapper[4925]: I0202 12:10:13.399217 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:10:13 crc kubenswrapper[4925]: I0202 12:10:13.399593 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" 
podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:10:13 crc kubenswrapper[4925]: I0202 12:10:13.399638 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 12:10:13 crc kubenswrapper[4925]: I0202 12:10:13.400431 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 12:10:13 crc kubenswrapper[4925]: I0202 12:10:13.400485 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" gracePeriod=600 Feb 02 12:10:13 crc kubenswrapper[4925]: E0202 12:10:13.621500 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:10:14 crc kubenswrapper[4925]: I0202 12:10:14.348790 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" exitCode=0 Feb 02 
12:10:14 crc kubenswrapper[4925]: I0202 12:10:14.348862 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282"} Feb 02 12:10:14 crc kubenswrapper[4925]: I0202 12:10:14.349209 4925 scope.go:117] "RemoveContainer" containerID="ca5eb2d565a340a933e43fe7e8773142e51abcef016e3b43a528750dcfc81145" Feb 02 12:10:14 crc kubenswrapper[4925]: I0202 12:10:14.351525 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:10:14 crc kubenswrapper[4925]: E0202 12:10:14.352155 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:10:25 crc kubenswrapper[4925]: I0202 12:10:25.664528 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:10:25 crc kubenswrapper[4925]: E0202 12:10:25.665530 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:10:25 crc kubenswrapper[4925]: I0202 12:10:25.883940 4925 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-zm42t"] Feb 02 12:10:25 crc kubenswrapper[4925]: E0202 12:10:25.884773 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b026c65-597a-4893-8d09-f0caf67db472" containerName="extract-utilities" Feb 02 12:10:25 crc kubenswrapper[4925]: I0202 12:10:25.884796 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b026c65-597a-4893-8d09-f0caf67db472" containerName="extract-utilities" Feb 02 12:10:25 crc kubenswrapper[4925]: E0202 12:10:25.884820 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b026c65-597a-4893-8d09-f0caf67db472" containerName="registry-server" Feb 02 12:10:25 crc kubenswrapper[4925]: I0202 12:10:25.884828 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b026c65-597a-4893-8d09-f0caf67db472" containerName="registry-server" Feb 02 12:10:25 crc kubenswrapper[4925]: E0202 12:10:25.884853 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b026c65-597a-4893-8d09-f0caf67db472" containerName="extract-content" Feb 02 12:10:25 crc kubenswrapper[4925]: I0202 12:10:25.884861 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b026c65-597a-4893-8d09-f0caf67db472" containerName="extract-content" Feb 02 12:10:25 crc kubenswrapper[4925]: I0202 12:10:25.885144 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b026c65-597a-4893-8d09-f0caf67db472" containerName="registry-server" Feb 02 12:10:25 crc kubenswrapper[4925]: I0202 12:10:25.886741 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:25 crc kubenswrapper[4925]: I0202 12:10:25.894798 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zm42t"] Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.030790 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-utilities\") pod \"redhat-operators-zm42t\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.030984 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-catalog-content\") pod \"redhat-operators-zm42t\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.031133 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djq26\" (UniqueName: \"kubernetes.io/projected/5923def9-dccb-400f-9d58-92fe290ded60-kube-api-access-djq26\") pod \"redhat-operators-zm42t\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.132994 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djq26\" (UniqueName: \"kubernetes.io/projected/5923def9-dccb-400f-9d58-92fe290ded60-kube-api-access-djq26\") pod \"redhat-operators-zm42t\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.133266 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-utilities\") pod \"redhat-operators-zm42t\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.133394 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-catalog-content\") pod \"redhat-operators-zm42t\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.133915 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-utilities\") pod \"redhat-operators-zm42t\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.133961 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-catalog-content\") pod \"redhat-operators-zm42t\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.242404 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djq26\" (UniqueName: \"kubernetes.io/projected/5923def9-dccb-400f-9d58-92fe290ded60-kube-api-access-djq26\") pod \"redhat-operators-zm42t\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:26 crc kubenswrapper[4925]: I0202 12:10:26.507252 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:27 crc kubenswrapper[4925]: I0202 12:10:27.015713 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zm42t"] Feb 02 12:10:27 crc kubenswrapper[4925]: I0202 12:10:27.463788 4925 generic.go:334] "Generic (PLEG): container finished" podID="5923def9-dccb-400f-9d58-92fe290ded60" containerID="facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc" exitCode=0 Feb 02 12:10:27 crc kubenswrapper[4925]: I0202 12:10:27.463870 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm42t" event={"ID":"5923def9-dccb-400f-9d58-92fe290ded60","Type":"ContainerDied","Data":"facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc"} Feb 02 12:10:27 crc kubenswrapper[4925]: I0202 12:10:27.464172 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm42t" event={"ID":"5923def9-dccb-400f-9d58-92fe290ded60","Type":"ContainerStarted","Data":"18c8b7499cebd53e05f269483ee05cc81639b98901fb34055c539feeb191af62"} Feb 02 12:10:27 crc kubenswrapper[4925]: I0202 12:10:27.465773 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 12:10:29 crc kubenswrapper[4925]: I0202 12:10:29.485971 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm42t" event={"ID":"5923def9-dccb-400f-9d58-92fe290ded60","Type":"ContainerStarted","Data":"d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2"} Feb 02 12:10:32 crc kubenswrapper[4925]: I0202 12:10:32.514691 4925 generic.go:334] "Generic (PLEG): container finished" podID="5923def9-dccb-400f-9d58-92fe290ded60" containerID="d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2" exitCode=0 Feb 02 12:10:32 crc kubenswrapper[4925]: I0202 12:10:32.514792 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-zm42t" event={"ID":"5923def9-dccb-400f-9d58-92fe290ded60","Type":"ContainerDied","Data":"d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2"} Feb 02 12:10:33 crc kubenswrapper[4925]: I0202 12:10:33.527103 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm42t" event={"ID":"5923def9-dccb-400f-9d58-92fe290ded60","Type":"ContainerStarted","Data":"aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63"} Feb 02 12:10:33 crc kubenswrapper[4925]: I0202 12:10:33.557666 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zm42t" podStartSLOduration=3.033422056 podStartE2EDuration="8.557643776s" podCreationTimestamp="2026-02-02 12:10:25 +0000 UTC" firstStartedPulling="2026-02-02 12:10:27.465578018 +0000 UTC m=+4404.469826980" lastFinishedPulling="2026-02-02 12:10:32.989799738 +0000 UTC m=+4409.994048700" observedRunningTime="2026-02-02 12:10:33.54858806 +0000 UTC m=+4410.552837022" watchObservedRunningTime="2026-02-02 12:10:33.557643776 +0000 UTC m=+4410.561892738" Feb 02 12:10:36 crc kubenswrapper[4925]: I0202 12:10:36.507533 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:36 crc kubenswrapper[4925]: I0202 12:10:36.508147 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:37 crc kubenswrapper[4925]: I0202 12:10:37.886328 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zm42t" podUID="5923def9-dccb-400f-9d58-92fe290ded60" containerName="registry-server" probeResult="failure" output=< Feb 02 12:10:37 crc kubenswrapper[4925]: timeout: failed to connect service ":50051" within 1s Feb 02 12:10:37 crc kubenswrapper[4925]: > Feb 02 12:10:40 crc kubenswrapper[4925]: I0202 
12:10:40.664845 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:10:40 crc kubenswrapper[4925]: E0202 12:10:40.665685 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:10:46 crc kubenswrapper[4925]: I0202 12:10:46.554306 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:46 crc kubenswrapper[4925]: I0202 12:10:46.614853 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:46 crc kubenswrapper[4925]: I0202 12:10:46.834641 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zm42t"] Feb 02 12:10:47 crc kubenswrapper[4925]: I0202 12:10:47.641346 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zm42t" podUID="5923def9-dccb-400f-9d58-92fe290ded60" containerName="registry-server" containerID="cri-o://aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63" gracePeriod=2 Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.182985 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.237046 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djq26\" (UniqueName: \"kubernetes.io/projected/5923def9-dccb-400f-9d58-92fe290ded60-kube-api-access-djq26\") pod \"5923def9-dccb-400f-9d58-92fe290ded60\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.237121 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-catalog-content\") pod \"5923def9-dccb-400f-9d58-92fe290ded60\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.237286 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-utilities\") pod \"5923def9-dccb-400f-9d58-92fe290ded60\" (UID: \"5923def9-dccb-400f-9d58-92fe290ded60\") " Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.238734 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-utilities" (OuterVolumeSpecName: "utilities") pod "5923def9-dccb-400f-9d58-92fe290ded60" (UID: "5923def9-dccb-400f-9d58-92fe290ded60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.255291 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5923def9-dccb-400f-9d58-92fe290ded60-kube-api-access-djq26" (OuterVolumeSpecName: "kube-api-access-djq26") pod "5923def9-dccb-400f-9d58-92fe290ded60" (UID: "5923def9-dccb-400f-9d58-92fe290ded60"). InnerVolumeSpecName "kube-api-access-djq26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.339851 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djq26\" (UniqueName: \"kubernetes.io/projected/5923def9-dccb-400f-9d58-92fe290ded60-kube-api-access-djq26\") on node \"crc\" DevicePath \"\"" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.339897 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.379339 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5923def9-dccb-400f-9d58-92fe290ded60" (UID: "5923def9-dccb-400f-9d58-92fe290ded60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.441807 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5923def9-dccb-400f-9d58-92fe290ded60-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.649542 4925 generic.go:334] "Generic (PLEG): container finished" podID="5923def9-dccb-400f-9d58-92fe290ded60" containerID="aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63" exitCode=0 Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.649579 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm42t" event={"ID":"5923def9-dccb-400f-9d58-92fe290ded60","Type":"ContainerDied","Data":"aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63"} Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.649622 4925 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-zm42t" event={"ID":"5923def9-dccb-400f-9d58-92fe290ded60","Type":"ContainerDied","Data":"18c8b7499cebd53e05f269483ee05cc81639b98901fb34055c539feeb191af62"} Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.649640 4925 scope.go:117] "RemoveContainer" containerID="aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.649742 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zm42t" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.715732 4925 scope.go:117] "RemoveContainer" containerID="d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.716811 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zm42t"] Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.725934 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zm42t"] Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.739293 4925 scope.go:117] "RemoveContainer" containerID="facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.778744 4925 scope.go:117] "RemoveContainer" containerID="aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63" Feb 02 12:10:48 crc kubenswrapper[4925]: E0202 12:10:48.779258 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63\": container with ID starting with aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63 not found: ID does not exist" containerID="aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.779310 4925 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63"} err="failed to get container status \"aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63\": rpc error: code = NotFound desc = could not find container \"aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63\": container with ID starting with aa70287b1df3fb6f6c60a596fef10d07d6d402f630417a7af21f05079b143a63 not found: ID does not exist" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.779338 4925 scope.go:117] "RemoveContainer" containerID="d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2" Feb 02 12:10:48 crc kubenswrapper[4925]: E0202 12:10:48.779605 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2\": container with ID starting with d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2 not found: ID does not exist" containerID="d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.779637 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2"} err="failed to get container status \"d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2\": rpc error: code = NotFound desc = could not find container \"d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2\": container with ID starting with d3a53a87498aac6de54bb07b2b7e3f53aecb8609357382f80704df4ab3f7d4b2 not found: ID does not exist" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.779650 4925 scope.go:117] "RemoveContainer" containerID="facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc" Feb 02 12:10:48 crc kubenswrapper[4925]: E0202 
12:10:48.779956 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc\": container with ID starting with facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc not found: ID does not exist" containerID="facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc" Feb 02 12:10:48 crc kubenswrapper[4925]: I0202 12:10:48.779976 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc"} err="failed to get container status \"facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc\": rpc error: code = NotFound desc = could not find container \"facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc\": container with ID starting with facb0c0b9703ab5bbf9c644f4a4b4c4f4348aa64816067eac8c94b38b15cefbc not found: ID does not exist" Feb 02 12:10:50 crc kubenswrapper[4925]: I0202 12:10:50.672604 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5923def9-dccb-400f-9d58-92fe290ded60" path="/var/lib/kubelet/pods/5923def9-dccb-400f-9d58-92fe290ded60/volumes" Feb 02 12:10:54 crc kubenswrapper[4925]: I0202 12:10:54.674258 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:10:54 crc kubenswrapper[4925]: E0202 12:10:54.674842 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:11:07 crc kubenswrapper[4925]: I0202 12:11:07.663933 
4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:11:07 crc kubenswrapper[4925]: E0202 12:11:07.664857 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:11:19 crc kubenswrapper[4925]: I0202 12:11:19.665187 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:11:19 crc kubenswrapper[4925]: E0202 12:11:19.666481 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:11:32 crc kubenswrapper[4925]: I0202 12:11:32.664790 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:11:32 crc kubenswrapper[4925]: E0202 12:11:32.667773 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:11:46 crc kubenswrapper[4925]: I0202 
12:11:46.665922 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:11:46 crc kubenswrapper[4925]: E0202 12:11:46.667912 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:11:57 crc kubenswrapper[4925]: I0202 12:11:57.664813 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:11:57 crc kubenswrapper[4925]: E0202 12:11:57.665780 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:12:11 crc kubenswrapper[4925]: I0202 12:12:11.664096 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:12:11 crc kubenswrapper[4925]: E0202 12:12:11.664834 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:12:24 crc 
kubenswrapper[4925]: I0202 12:12:24.669971 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:12:24 crc kubenswrapper[4925]: E0202 12:12:24.670783 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:12:38 crc kubenswrapper[4925]: I0202 12:12:38.664314 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:12:38 crc kubenswrapper[4925]: E0202 12:12:38.665097 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:12:51 crc kubenswrapper[4925]: I0202 12:12:51.664481 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:12:51 crc kubenswrapper[4925]: E0202 12:12:51.665285 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 
02 12:13:04 crc kubenswrapper[4925]: I0202 12:13:04.670675 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:13:04 crc kubenswrapper[4925]: E0202 12:13:04.671347 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:13:15 crc kubenswrapper[4925]: I0202 12:13:15.665174 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:13:15 crc kubenswrapper[4925]: E0202 12:13:15.665912 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.446597 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mgzr5"] Feb 02 12:13:16 crc kubenswrapper[4925]: E0202 12:13:16.447297 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5923def9-dccb-400f-9d58-92fe290ded60" containerName="extract-utilities" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.447330 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="5923def9-dccb-400f-9d58-92fe290ded60" containerName="extract-utilities" Feb 02 12:13:16 crc kubenswrapper[4925]: E0202 12:13:16.447362 4925 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5923def9-dccb-400f-9d58-92fe290ded60" containerName="extract-content" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.447373 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="5923def9-dccb-400f-9d58-92fe290ded60" containerName="extract-content" Feb 02 12:13:16 crc kubenswrapper[4925]: E0202 12:13:16.447403 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5923def9-dccb-400f-9d58-92fe290ded60" containerName="registry-server" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.447410 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="5923def9-dccb-400f-9d58-92fe290ded60" containerName="registry-server" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.447662 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="5923def9-dccb-400f-9d58-92fe290ded60" containerName="registry-server" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.449813 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.464750 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgzr5"] Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.542656 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm49s\" (UniqueName: \"kubernetes.io/projected/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-kube-api-access-xm49s\") pod \"certified-operators-mgzr5\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.542730 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-catalog-content\") pod \"certified-operators-mgzr5\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.542903 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-utilities\") pod \"certified-operators-mgzr5\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.644980 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm49s\" (UniqueName: \"kubernetes.io/projected/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-kube-api-access-xm49s\") pod \"certified-operators-mgzr5\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.645039 4925 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-catalog-content\") pod \"certified-operators-mgzr5\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.645152 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-utilities\") pod \"certified-operators-mgzr5\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.645589 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-utilities\") pod \"certified-operators-mgzr5\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.645700 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-catalog-content\") pod \"certified-operators-mgzr5\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.663049 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm49s\" (UniqueName: \"kubernetes.io/projected/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-kube-api-access-xm49s\") pod \"certified-operators-mgzr5\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:16 crc kubenswrapper[4925]: I0202 12:13:16.770774 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:17 crc kubenswrapper[4925]: I0202 12:13:17.300454 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgzr5"] Feb 02 12:13:17 crc kubenswrapper[4925]: I0202 12:13:17.973771 4925 generic.go:334] "Generic (PLEG): container finished" podID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerID="bc3247586689a37b12172ed3ae88d7ba9e60ee77426bdc4ea0edd90b1c69fad5" exitCode=0 Feb 02 12:13:17 crc kubenswrapper[4925]: I0202 12:13:17.973910 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgzr5" event={"ID":"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1","Type":"ContainerDied","Data":"bc3247586689a37b12172ed3ae88d7ba9e60ee77426bdc4ea0edd90b1c69fad5"} Feb 02 12:13:17 crc kubenswrapper[4925]: I0202 12:13:17.974273 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgzr5" event={"ID":"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1","Type":"ContainerStarted","Data":"834b2413fa331a05b5f7ed88da1ef38ab0512abe5b2a95acf02085c825ceaab8"} Feb 02 12:13:18 crc kubenswrapper[4925]: I0202 12:13:18.983857 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgzr5" event={"ID":"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1","Type":"ContainerStarted","Data":"b559ab957217a3270d5b374c440a23725a0aa01f27a904bf9700eaecc6f4bad3"} Feb 02 12:13:20 crc kubenswrapper[4925]: I0202 12:13:20.004694 4925 generic.go:334] "Generic (PLEG): container finished" podID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerID="b559ab957217a3270d5b374c440a23725a0aa01f27a904bf9700eaecc6f4bad3" exitCode=0 Feb 02 12:13:20 crc kubenswrapper[4925]: I0202 12:13:20.004985 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgzr5" 
event={"ID":"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1","Type":"ContainerDied","Data":"b559ab957217a3270d5b374c440a23725a0aa01f27a904bf9700eaecc6f4bad3"} Feb 02 12:13:21 crc kubenswrapper[4925]: I0202 12:13:21.022579 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgzr5" event={"ID":"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1","Type":"ContainerStarted","Data":"bc6907e06a5966bacb421db3afa7dc4e8a325d0a03dcb7cb8988d9d3b8bbb912"} Feb 02 12:13:21 crc kubenswrapper[4925]: I0202 12:13:21.041266 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mgzr5" podStartSLOduration=2.6146048520000003 podStartE2EDuration="5.041236674s" podCreationTimestamp="2026-02-02 12:13:16 +0000 UTC" firstStartedPulling="2026-02-02 12:13:17.975647424 +0000 UTC m=+4574.979896386" lastFinishedPulling="2026-02-02 12:13:20.402279246 +0000 UTC m=+4577.406528208" observedRunningTime="2026-02-02 12:13:21.037838671 +0000 UTC m=+4578.042087653" watchObservedRunningTime="2026-02-02 12:13:21.041236674 +0000 UTC m=+4578.045485636" Feb 02 12:13:26 crc kubenswrapper[4925]: I0202 12:13:26.771724 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:26 crc kubenswrapper[4925]: I0202 12:13:26.772231 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:27 crc kubenswrapper[4925]: I0202 12:13:27.489698 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:27 crc kubenswrapper[4925]: I0202 12:13:27.540793 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:27 crc kubenswrapper[4925]: I0202 12:13:27.726614 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-mgzr5"] Feb 02 12:13:29 crc kubenswrapper[4925]: I0202 12:13:29.113984 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mgzr5" podUID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerName="registry-server" containerID="cri-o://bc6907e06a5966bacb421db3afa7dc4e8a325d0a03dcb7cb8988d9d3b8bbb912" gracePeriod=2 Feb 02 12:13:29 crc kubenswrapper[4925]: I0202 12:13:29.664636 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:13:29 crc kubenswrapper[4925]: E0202 12:13:29.665496 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.134410 4925 generic.go:334] "Generic (PLEG): container finished" podID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerID="bc6907e06a5966bacb421db3afa7dc4e8a325d0a03dcb7cb8988d9d3b8bbb912" exitCode=0 Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.134510 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgzr5" event={"ID":"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1","Type":"ContainerDied","Data":"bc6907e06a5966bacb421db3afa7dc4e8a325d0a03dcb7cb8988d9d3b8bbb912"} Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.229721 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.328691 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm49s\" (UniqueName: \"kubernetes.io/projected/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-kube-api-access-xm49s\") pod \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.328799 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-utilities\") pod \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.328869 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-catalog-content\") pod \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\" (UID: \"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1\") " Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.330321 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-utilities" (OuterVolumeSpecName: "utilities") pod "eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" (UID: "eb31cef4-fdef-4aba-82ba-f3f9a87e59b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.335162 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-kube-api-access-xm49s" (OuterVolumeSpecName: "kube-api-access-xm49s") pod "eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" (UID: "eb31cef4-fdef-4aba-82ba-f3f9a87e59b1"). InnerVolumeSpecName "kube-api-access-xm49s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.384274 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" (UID: "eb31cef4-fdef-4aba-82ba-f3f9a87e59b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.431004 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.431042 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:30 crc kubenswrapper[4925]: I0202 12:13:30.431052 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm49s\" (UniqueName: \"kubernetes.io/projected/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1-kube-api-access-xm49s\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:31 crc kubenswrapper[4925]: I0202 12:13:31.144637 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgzr5" event={"ID":"eb31cef4-fdef-4aba-82ba-f3f9a87e59b1","Type":"ContainerDied","Data":"834b2413fa331a05b5f7ed88da1ef38ab0512abe5b2a95acf02085c825ceaab8"} Feb 02 12:13:31 crc kubenswrapper[4925]: I0202 12:13:31.144814 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgzr5" Feb 02 12:13:31 crc kubenswrapper[4925]: I0202 12:13:31.145033 4925 scope.go:117] "RemoveContainer" containerID="bc6907e06a5966bacb421db3afa7dc4e8a325d0a03dcb7cb8988d9d3b8bbb912" Feb 02 12:13:31 crc kubenswrapper[4925]: I0202 12:13:31.166744 4925 scope.go:117] "RemoveContainer" containerID="b559ab957217a3270d5b374c440a23725a0aa01f27a904bf9700eaecc6f4bad3" Feb 02 12:13:31 crc kubenswrapper[4925]: I0202 12:13:31.169595 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mgzr5"] Feb 02 12:13:31 crc kubenswrapper[4925]: I0202 12:13:31.180670 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mgzr5"] Feb 02 12:13:31 crc kubenswrapper[4925]: I0202 12:13:31.200722 4925 scope.go:117] "RemoveContainer" containerID="bc3247586689a37b12172ed3ae88d7ba9e60ee77426bdc4ea0edd90b1c69fad5" Feb 02 12:13:32 crc kubenswrapper[4925]: I0202 12:13:32.697397 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" path="/var/lib/kubelet/pods/eb31cef4-fdef-4aba-82ba-f3f9a87e59b1/volumes" Feb 02 12:13:42 crc kubenswrapper[4925]: I0202 12:13:42.665466 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:13:42 crc kubenswrapper[4925]: E0202 12:13:42.666315 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:13:49 crc kubenswrapper[4925]: I0202 12:13:49.298669 4925 generic.go:334] "Generic (PLEG): container 
finished" podID="7390b503-a9bf-41e3-9506-1f63b8ad6d7d" containerID="41a7a6c19be0ec5ec82dcdcc881aeff0404c9a2ea076215a4793f01112923dc7" exitCode=0 Feb 02 12:13:49 crc kubenswrapper[4925]: I0202 12:13:49.298758 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"7390b503-a9bf-41e3-9506-1f63b8ad6d7d","Type":"ContainerDied","Data":"41a7a6c19be0ec5ec82dcdcc881aeff0404c9a2ea076215a4793f01112923dc7"} Feb 02 12:13:50 crc kubenswrapper[4925]: I0202 12:13:50.949936 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.059894 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-temporary\") pod \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.059982 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.060032 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-config-data\") pod \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.060065 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config\") pod 
\"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.060096 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-workdir\") pod \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.060117 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ssh-key\") pod \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.060148 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config-secret\") pod \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.060235 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll8f5\" (UniqueName: \"kubernetes.io/projected/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-kube-api-access-ll8f5\") pod \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.060255 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ca-certs\") pod \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\" (UID: \"7390b503-a9bf-41e3-9506-1f63b8ad6d7d\") " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.067034 4925 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "7390b503-a9bf-41e3-9506-1f63b8ad6d7d" (UID: "7390b503-a9bf-41e3-9506-1f63b8ad6d7d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.067298 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-config-data" (OuterVolumeSpecName: "config-data") pod "7390b503-a9bf-41e3-9506-1f63b8ad6d7d" (UID: "7390b503-a9bf-41e3-9506-1f63b8ad6d7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.070327 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-kube-api-access-ll8f5" (OuterVolumeSpecName: "kube-api-access-ll8f5") pod "7390b503-a9bf-41e3-9506-1f63b8ad6d7d" (UID: "7390b503-a9bf-41e3-9506-1f63b8ad6d7d"). InnerVolumeSpecName "kube-api-access-ll8f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.076272 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "7390b503-a9bf-41e3-9506-1f63b8ad6d7d" (UID: "7390b503-a9bf-41e3-9506-1f63b8ad6d7d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.082338 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "7390b503-a9bf-41e3-9506-1f63b8ad6d7d" (UID: "7390b503-a9bf-41e3-9506-1f63b8ad6d7d"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.094405 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7390b503-a9bf-41e3-9506-1f63b8ad6d7d" (UID: "7390b503-a9bf-41e3-9506-1f63b8ad6d7d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.095178 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "7390b503-a9bf-41e3-9506-1f63b8ad6d7d" (UID: "7390b503-a9bf-41e3-9506-1f63b8ad6d7d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.107226 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7390b503-a9bf-41e3-9506-1f63b8ad6d7d" (UID: "7390b503-a9bf-41e3-9506-1f63b8ad6d7d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.117214 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7390b503-a9bf-41e3-9506-1f63b8ad6d7d" (UID: "7390b503-a9bf-41e3-9506-1f63b8ad6d7d"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.162149 4925 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.162179 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll8f5\" (UniqueName: \"kubernetes.io/projected/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-kube-api-access-ll8f5\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.162193 4925 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.162205 4925 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.162240 4925 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.162250 4925 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.162259 4925 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.162268 4925 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.162279 4925 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7390b503-a9bf-41e3-9506-1f63b8ad6d7d-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.187053 4925 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.264361 4925 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.318068 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"7390b503-a9bf-41e3-9506-1f63b8ad6d7d","Type":"ContainerDied","Data":"45f6da8325c0cbb87e6a61d492f881aac25bfbd40ec411b4085fcb4d7d131b09"} Feb 02 12:13:51 crc kubenswrapper[4925]: I0202 12:13:51.318125 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45f6da8325c0cbb87e6a61d492f881aac25bfbd40ec411b4085fcb4d7d131b09" Feb 02 12:13:51 crc 
kubenswrapper[4925]: I0202 12:13:51.318182 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 12:13:56 crc kubenswrapper[4925]: I0202 12:13:56.665770 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:13:56 crc kubenswrapper[4925]: E0202 12:13:56.667015 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.260149 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 12:14:02 crc kubenswrapper[4925]: E0202 12:14:02.261063 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerName="extract-utilities" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.261096 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerName="extract-utilities" Feb 02 12:14:02 crc kubenswrapper[4925]: E0202 12:14:02.261111 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7390b503-a9bf-41e3-9506-1f63b8ad6d7d" containerName="tempest-tests-tempest-tests-runner" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.261118 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7390b503-a9bf-41e3-9506-1f63b8ad6d7d" containerName="tempest-tests-tempest-tests-runner" Feb 02 12:14:02 crc kubenswrapper[4925]: E0202 12:14:02.261130 4925 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerName="extract-content" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.261136 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerName="extract-content" Feb 02 12:14:02 crc kubenswrapper[4925]: E0202 12:14:02.261147 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerName="registry-server" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.261152 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerName="registry-server" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.261331 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7390b503-a9bf-41e3-9506-1f63b8ad6d7d" containerName="tempest-tests-tempest-tests-runner" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.261341 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb31cef4-fdef-4aba-82ba-f3f9a87e59b1" containerName="registry-server" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.261995 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.264023 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bjkfm" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.271574 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.422877 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fdd0c34b-7157-4648-ae8e-de13e12bcaed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.422995 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnz9\" (UniqueName: \"kubernetes.io/projected/fdd0c34b-7157-4648-ae8e-de13e12bcaed-kube-api-access-wqnz9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fdd0c34b-7157-4648-ae8e-de13e12bcaed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.524845 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fdd0c34b-7157-4648-ae8e-de13e12bcaed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.524990 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnz9\" (UniqueName: 
\"kubernetes.io/projected/fdd0c34b-7157-4648-ae8e-de13e12bcaed-kube-api-access-wqnz9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fdd0c34b-7157-4648-ae8e-de13e12bcaed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.525487 4925 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fdd0c34b-7157-4648-ae8e-de13e12bcaed\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.545422 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnz9\" (UniqueName: \"kubernetes.io/projected/fdd0c34b-7157-4648-ae8e-de13e12bcaed-kube-api-access-wqnz9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fdd0c34b-7157-4648-ae8e-de13e12bcaed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.569531 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fdd0c34b-7157-4648-ae8e-de13e12bcaed\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 12:14:02 crc kubenswrapper[4925]: I0202 12:14:02.642763 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 12:14:03 crc kubenswrapper[4925]: I0202 12:14:03.150043 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 12:14:03 crc kubenswrapper[4925]: I0202 12:14:03.415977 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fdd0c34b-7157-4648-ae8e-de13e12bcaed","Type":"ContainerStarted","Data":"41280e67522566592b95c974906eeabd18762b2e69736740044151164c14ee9e"} Feb 02 12:14:04 crc kubenswrapper[4925]: I0202 12:14:04.429839 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fdd0c34b-7157-4648-ae8e-de13e12bcaed","Type":"ContainerStarted","Data":"eed267b0be2bd0fee14a1504d4e74a1255c8ee1dc15f6b5c9f4ad7deccd1c7bc"} Feb 02 12:14:09 crc kubenswrapper[4925]: I0202 12:14:09.665592 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:14:09 crc kubenswrapper[4925]: E0202 12:14:09.666486 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:14:24 crc kubenswrapper[4925]: I0202 12:14:24.678576 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:14:24 crc kubenswrapper[4925]: E0202 12:14:24.679304 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.062629 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=25.003499714 podStartE2EDuration="26.062611695s" podCreationTimestamp="2026-02-02 12:14:02 +0000 UTC" firstStartedPulling="2026-02-02 12:14:03.131199942 +0000 UTC m=+4620.135448904" lastFinishedPulling="2026-02-02 12:14:04.190311923 +0000 UTC m=+4621.194560885" observedRunningTime="2026-02-02 12:14:04.452414663 +0000 UTC m=+4621.456663625" watchObservedRunningTime="2026-02-02 12:14:28.062611695 +0000 UTC m=+4645.066860657" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.071574 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-s7vgc/must-gather-6dbzg"] Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.073560 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.077277 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-s7vgc"/"kube-root-ca.crt" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.077544 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-s7vgc"/"openshift-service-ca.crt" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.077666 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-s7vgc"/"default-dockercfg-gzwkc" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.085387 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-s7vgc/must-gather-6dbzg"] Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.147419 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fftkh\" (UniqueName: \"kubernetes.io/projected/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-kube-api-access-fftkh\") pod \"must-gather-6dbzg\" (UID: \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\") " pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.147516 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-must-gather-output\") pod \"must-gather-6dbzg\" (UID: \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\") " pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.249424 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fftkh\" (UniqueName: \"kubernetes.io/projected/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-kube-api-access-fftkh\") pod \"must-gather-6dbzg\" (UID: \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\") " 
pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.249482 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-must-gather-output\") pod \"must-gather-6dbzg\" (UID: \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\") " pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.249905 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-must-gather-output\") pod \"must-gather-6dbzg\" (UID: \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\") " pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.283351 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fftkh\" (UniqueName: \"kubernetes.io/projected/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-kube-api-access-fftkh\") pod \"must-gather-6dbzg\" (UID: \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\") " pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.403100 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:14:28 crc kubenswrapper[4925]: I0202 12:14:28.847916 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-s7vgc/must-gather-6dbzg"] Feb 02 12:14:29 crc kubenswrapper[4925]: I0202 12:14:29.663376 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" event={"ID":"9ed232c0-fe5f-4069-9cf4-adfe339d2da4","Type":"ContainerStarted","Data":"649cfb7ac4618042827045af327d04a8588c3b317ccd65c8b5f9e4db600ffba7"} Feb 02 12:14:35 crc kubenswrapper[4925]: I0202 12:14:35.664127 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:14:35 crc kubenswrapper[4925]: E0202 12:14:35.664984 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:14:35 crc kubenswrapper[4925]: I0202 12:14:35.712761 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" event={"ID":"9ed232c0-fe5f-4069-9cf4-adfe339d2da4","Type":"ContainerStarted","Data":"70f6fd2558319e5792588bbfbaae0f8240561bbfc2927840610703e4b6c8882a"} Feb 02 12:14:35 crc kubenswrapper[4925]: I0202 12:14:35.712818 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" event={"ID":"9ed232c0-fe5f-4069-9cf4-adfe339d2da4","Type":"ContainerStarted","Data":"b04e863f8f70627a36ab0158b205fbfdc6fa66c0485d1c65cbb3517c7a29a74c"} Feb 02 12:14:35 crc kubenswrapper[4925]: I0202 12:14:35.728786 4925 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" podStartSLOduration=1.801588911 podStartE2EDuration="7.728768012s" podCreationTimestamp="2026-02-02 12:14:28 +0000 UTC" firstStartedPulling="2026-02-02 12:14:28.862944085 +0000 UTC m=+4645.867193047" lastFinishedPulling="2026-02-02 12:14:34.790123186 +0000 UTC m=+4651.794372148" observedRunningTime="2026-02-02 12:14:35.727872128 +0000 UTC m=+4652.732121110" watchObservedRunningTime="2026-02-02 12:14:35.728768012 +0000 UTC m=+4652.733016984" Feb 02 12:14:39 crc kubenswrapper[4925]: I0202 12:14:39.391967 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-s7vgc/crc-debug-klzx9"] Feb 02 12:14:39 crc kubenswrapper[4925]: I0202 12:14:39.393715 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:14:39 crc kubenswrapper[4925]: I0202 12:14:39.495139 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-host\") pod \"crc-debug-klzx9\" (UID: \"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\") " pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:14:39 crc kubenswrapper[4925]: I0202 12:14:39.495798 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6pzc\" (UniqueName: \"kubernetes.io/projected/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-kube-api-access-z6pzc\") pod \"crc-debug-klzx9\" (UID: \"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\") " pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:14:39 crc kubenswrapper[4925]: I0202 12:14:39.598222 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6pzc\" (UniqueName: \"kubernetes.io/projected/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-kube-api-access-z6pzc\") pod \"crc-debug-klzx9\" (UID: 
\"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\") " pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:14:39 crc kubenswrapper[4925]: I0202 12:14:39.598584 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-host\") pod \"crc-debug-klzx9\" (UID: \"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\") " pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:14:39 crc kubenswrapper[4925]: I0202 12:14:39.598835 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-host\") pod \"crc-debug-klzx9\" (UID: \"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\") " pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:14:39 crc kubenswrapper[4925]: I0202 12:14:39.626614 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6pzc\" (UniqueName: \"kubernetes.io/projected/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-kube-api-access-z6pzc\") pod \"crc-debug-klzx9\" (UID: \"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\") " pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:14:39 crc kubenswrapper[4925]: I0202 12:14:39.717849 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:14:40 crc kubenswrapper[4925]: W0202 12:14:40.153443 4925 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30ef1017_d237_46b6_b9f2_00ed5c2f39b0.slice/crio-918e2f3e4d3b25a18b8ddf06ea6e0852ae0dff3681f8f00959bd817e59f4a95e WatchSource:0}: Error finding container 918e2f3e4d3b25a18b8ddf06ea6e0852ae0dff3681f8f00959bd817e59f4a95e: Status 404 returned error can't find the container with id 918e2f3e4d3b25a18b8ddf06ea6e0852ae0dff3681f8f00959bd817e59f4a95e Feb 02 12:14:40 crc kubenswrapper[4925]: I0202 12:14:40.804004 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/crc-debug-klzx9" event={"ID":"30ef1017-d237-46b6-b9f2-00ed5c2f39b0","Type":"ContainerStarted","Data":"918e2f3e4d3b25a18b8ddf06ea6e0852ae0dff3681f8f00959bd817e59f4a95e"} Feb 02 12:14:50 crc kubenswrapper[4925]: I0202 12:14:50.665124 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:14:50 crc kubenswrapper[4925]: E0202 12:14:50.666150 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:14:51 crc kubenswrapper[4925]: I0202 12:14:51.897677 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/crc-debug-klzx9" event={"ID":"30ef1017-d237-46b6-b9f2-00ed5c2f39b0","Type":"ContainerStarted","Data":"ab2c7bee3c54d3943d65a047d69a1ce19ce65bd4cfa82510c29c2b61029c37a4"} Feb 02 12:14:51 crc kubenswrapper[4925]: I0202 12:14:51.913607 4925 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-s7vgc/crc-debug-klzx9" podStartSLOduration=2.299588928 podStartE2EDuration="12.913591545s" podCreationTimestamp="2026-02-02 12:14:39 +0000 UTC" firstStartedPulling="2026-02-02 12:14:40.156094378 +0000 UTC m=+4657.160343340" lastFinishedPulling="2026-02-02 12:14:50.770096995 +0000 UTC m=+4667.774345957" observedRunningTime="2026-02-02 12:14:51.910288195 +0000 UTC m=+4668.914537167" watchObservedRunningTime="2026-02-02 12:14:51.913591545 +0000 UTC m=+4668.917840507" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.163491 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56"] Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.165531 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.169536 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.169839 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.184362 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56"] Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.316778 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwkh9\" (UniqueName: \"kubernetes.io/projected/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-kube-api-access-gwkh9\") pod \"collect-profiles-29500575-7lw56\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.317041 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-config-volume\") pod \"collect-profiles-29500575-7lw56\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.317168 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-secret-volume\") pod \"collect-profiles-29500575-7lw56\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.419442 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-config-volume\") pod \"collect-profiles-29500575-7lw56\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.419510 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-secret-volume\") pod \"collect-profiles-29500575-7lw56\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.419611 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwkh9\" (UniqueName: 
\"kubernetes.io/projected/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-kube-api-access-gwkh9\") pod \"collect-profiles-29500575-7lw56\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.420651 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-config-volume\") pod \"collect-profiles-29500575-7lw56\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.743746 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwkh9\" (UniqueName: \"kubernetes.io/projected/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-kube-api-access-gwkh9\") pod \"collect-profiles-29500575-7lw56\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.743888 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-secret-volume\") pod \"collect-profiles-29500575-7lw56\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:00 crc kubenswrapper[4925]: I0202 12:15:00.788303 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:02 crc kubenswrapper[4925]: I0202 12:15:02.989110 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56"] Feb 02 12:15:03 crc kubenswrapper[4925]: I0202 12:15:03.015023 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" event={"ID":"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1","Type":"ContainerStarted","Data":"843c26c43c89f35d9aa5ff2708e6e7a0a23fabb5e7ca39828b613e50a7e12f6e"} Feb 02 12:15:03 crc kubenswrapper[4925]: I0202 12:15:03.665108 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:15:03 crc kubenswrapper[4925]: E0202 12:15:03.665956 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:15:04 crc kubenswrapper[4925]: I0202 12:15:04.038556 4925 generic.go:334] "Generic (PLEG): container finished" podID="5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1" containerID="4341c3e963888a432a3f3004d904f3c9cfc51a14061fcc24bd503ad4bc0ae4de" exitCode=0 Feb 02 12:15:04 crc kubenswrapper[4925]: I0202 12:15:04.038978 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" event={"ID":"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1","Type":"ContainerDied","Data":"4341c3e963888a432a3f3004d904f3c9cfc51a14061fcc24bd503ad4bc0ae4de"} Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.435759 4925 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.528672 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-config-volume\") pod \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.528854 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-secret-volume\") pod \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.529061 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwkh9\" (UniqueName: \"kubernetes.io/projected/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-kube-api-access-gwkh9\") pod \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\" (UID: \"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1\") " Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.529093 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1" (UID: "5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.529600 4925 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.538820 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1" (UID: "5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.541921 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-kube-api-access-gwkh9" (OuterVolumeSpecName: "kube-api-access-gwkh9") pod "5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1" (UID: "5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1"). InnerVolumeSpecName "kube-api-access-gwkh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.631843 4925 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:15:05 crc kubenswrapper[4925]: I0202 12:15:05.631882 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwkh9\" (UniqueName: \"kubernetes.io/projected/5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1-kube-api-access-gwkh9\") on node \"crc\" DevicePath \"\"" Feb 02 12:15:06 crc kubenswrapper[4925]: I0202 12:15:06.054813 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" event={"ID":"5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1","Type":"ContainerDied","Data":"843c26c43c89f35d9aa5ff2708e6e7a0a23fabb5e7ca39828b613e50a7e12f6e"} Feb 02 12:15:06 crc kubenswrapper[4925]: I0202 12:15:06.055210 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="843c26c43c89f35d9aa5ff2708e6e7a0a23fabb5e7ca39828b613e50a7e12f6e" Feb 02 12:15:06 crc kubenswrapper[4925]: I0202 12:15:06.055029 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500575-7lw56" Feb 02 12:15:06 crc kubenswrapper[4925]: I0202 12:15:06.507322 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf"] Feb 02 12:15:06 crc kubenswrapper[4925]: I0202 12:15:06.516096 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-t5nxf"] Feb 02 12:15:06 crc kubenswrapper[4925]: I0202 12:15:06.680043 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9787c20-6741-4ac0-ac09-3a0b09c212f3" path="/var/lib/kubelet/pods/d9787c20-6741-4ac0-ac09-3a0b09c212f3/volumes" Feb 02 12:15:16 crc kubenswrapper[4925]: I0202 12:15:16.664933 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:15:17 crc kubenswrapper[4925]: I0202 12:15:17.152138 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"c5d69f58afc7bdb04bc664b8e1f93d1bef487fdc416cef9285aac62e414c7431"} Feb 02 12:15:31 crc kubenswrapper[4925]: I0202 12:15:31.166052 4925 scope.go:117] "RemoveContainer" containerID="92ce0fd69b1ff68fb2e42f4955f4ded464107f95caae9007ea43dc8edd78cb29" Feb 02 12:15:46 crc kubenswrapper[4925]: I0202 12:15:46.419519 4925 generic.go:334] "Generic (PLEG): container finished" podID="30ef1017-d237-46b6-b9f2-00ed5c2f39b0" containerID="ab2c7bee3c54d3943d65a047d69a1ce19ce65bd4cfa82510c29c2b61029c37a4" exitCode=0 Feb 02 12:15:46 crc kubenswrapper[4925]: I0202 12:15:46.419612 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/crc-debug-klzx9" 
event={"ID":"30ef1017-d237-46b6-b9f2-00ed5c2f39b0","Type":"ContainerDied","Data":"ab2c7bee3c54d3943d65a047d69a1ce19ce65bd4cfa82510c29c2b61029c37a4"} Feb 02 12:15:47 crc kubenswrapper[4925]: I0202 12:15:47.530838 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:15:47 crc kubenswrapper[4925]: I0202 12:15:47.569429 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-host\") pod \"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\" (UID: \"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\") " Feb 02 12:15:47 crc kubenswrapper[4925]: I0202 12:15:47.569882 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-host" (OuterVolumeSpecName: "host") pod "30ef1017-d237-46b6-b9f2-00ed5c2f39b0" (UID: "30ef1017-d237-46b6-b9f2-00ed5c2f39b0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 12:15:47 crc kubenswrapper[4925]: I0202 12:15:47.572468 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-s7vgc/crc-debug-klzx9"] Feb 02 12:15:47 crc kubenswrapper[4925]: I0202 12:15:47.580113 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-s7vgc/crc-debug-klzx9"] Feb 02 12:15:47 crc kubenswrapper[4925]: I0202 12:15:47.671015 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6pzc\" (UniqueName: \"kubernetes.io/projected/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-kube-api-access-z6pzc\") pod \"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\" (UID: \"30ef1017-d237-46b6-b9f2-00ed5c2f39b0\") " Feb 02 12:15:47 crc kubenswrapper[4925]: I0202 12:15:47.671877 4925 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-host\") on node \"crc\" DevicePath \"\"" Feb 02 12:15:47 crc kubenswrapper[4925]: I0202 12:15:47.676682 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-kube-api-access-z6pzc" (OuterVolumeSpecName: "kube-api-access-z6pzc") pod "30ef1017-d237-46b6-b9f2-00ed5c2f39b0" (UID: "30ef1017-d237-46b6-b9f2-00ed5c2f39b0"). InnerVolumeSpecName "kube-api-access-z6pzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:15:47 crc kubenswrapper[4925]: I0202 12:15:47.774400 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6pzc\" (UniqueName: \"kubernetes.io/projected/30ef1017-d237-46b6-b9f2-00ed5c2f39b0-kube-api-access-z6pzc\") on node \"crc\" DevicePath \"\"" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.437300 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918e2f3e4d3b25a18b8ddf06ea6e0852ae0dff3681f8f00959bd817e59f4a95e" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.437390 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-klzx9" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.674571 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ef1017-d237-46b6-b9f2-00ed5c2f39b0" path="/var/lib/kubelet/pods/30ef1017-d237-46b6-b9f2-00ed5c2f39b0/volumes" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.722847 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-s7vgc/crc-debug-7kwp9"] Feb 02 12:15:48 crc kubenswrapper[4925]: E0202 12:15:48.723219 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1" containerName="collect-profiles" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.723238 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1" containerName="collect-profiles" Feb 02 12:15:48 crc kubenswrapper[4925]: E0202 12:15:48.723257 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ef1017-d237-46b6-b9f2-00ed5c2f39b0" containerName="container-00" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.723263 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ef1017-d237-46b6-b9f2-00ed5c2f39b0" containerName="container-00" Feb 02 12:15:48 crc kubenswrapper[4925]: 
I0202 12:15:48.723483 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c70bd83-17fa-4ab7-92bc-e9a6673a3bb1" containerName="collect-profiles" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.723507 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ef1017-d237-46b6-b9f2-00ed5c2f39b0" containerName="container-00" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.729225 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.813739 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-host\") pod \"crc-debug-7kwp9\" (UID: \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\") " pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.814308 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-846sm\" (UniqueName: \"kubernetes.io/projected/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-kube-api-access-846sm\") pod \"crc-debug-7kwp9\" (UID: \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\") " pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.915786 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-846sm\" (UniqueName: \"kubernetes.io/projected/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-kube-api-access-846sm\") pod \"crc-debug-7kwp9\" (UID: \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\") " pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.916168 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-host\") pod 
\"crc-debug-7kwp9\" (UID: \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\") " pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.916324 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-host\") pod \"crc-debug-7kwp9\" (UID: \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\") " pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:48 crc kubenswrapper[4925]: I0202 12:15:48.945912 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-846sm\" (UniqueName: \"kubernetes.io/projected/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-kube-api-access-846sm\") pod \"crc-debug-7kwp9\" (UID: \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\") " pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:49 crc kubenswrapper[4925]: I0202 12:15:49.049684 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:49 crc kubenswrapper[4925]: I0202 12:15:49.445385 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" event={"ID":"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9","Type":"ContainerStarted","Data":"3ddad9514b7ae5ef693358326ef3d58ce11686c047809b93b5160da06d570cad"} Feb 02 12:15:49 crc kubenswrapper[4925]: I0202 12:15:49.445783 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" event={"ID":"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9","Type":"ContainerStarted","Data":"13b4a0d01eef4aac7e37eed2881ba9d32e30f9116018720508d3ffb91947b805"} Feb 02 12:15:49 crc kubenswrapper[4925]: I0202 12:15:49.463612 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" podStartSLOduration=1.463596343 podStartE2EDuration="1.463596343s" podCreationTimestamp="2026-02-02 12:15:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 12:15:49.45716125 +0000 UTC m=+4726.461410212" watchObservedRunningTime="2026-02-02 12:15:49.463596343 +0000 UTC m=+4726.467845305" Feb 02 12:15:50 crc kubenswrapper[4925]: I0202 12:15:50.454042 4925 generic.go:334] "Generic (PLEG): container finished" podID="139c57d0-06c8-4ab5-9957-0f4ef8f8edd9" containerID="3ddad9514b7ae5ef693358326ef3d58ce11686c047809b93b5160da06d570cad" exitCode=0 Feb 02 12:15:50 crc kubenswrapper[4925]: I0202 12:15:50.454148 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" event={"ID":"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9","Type":"ContainerDied","Data":"3ddad9514b7ae5ef693358326ef3d58ce11686c047809b93b5160da06d570cad"} Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.475164 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" event={"ID":"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9","Type":"ContainerDied","Data":"13b4a0d01eef4aac7e37eed2881ba9d32e30f9116018720508d3ffb91947b805"} Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.475561 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b4a0d01eef4aac7e37eed2881ba9d32e30f9116018720508d3ffb91947b805" Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.557549 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.608458 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-s7vgc/crc-debug-7kwp9"] Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.616484 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-s7vgc/crc-debug-7kwp9"] Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.681521 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-846sm\" (UniqueName: \"kubernetes.io/projected/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-kube-api-access-846sm\") pod \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\" (UID: \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\") " Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.681895 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-host\") pod \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\" (UID: \"139c57d0-06c8-4ab5-9957-0f4ef8f8edd9\") " Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.681926 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-host" (OuterVolumeSpecName: "host") pod "139c57d0-06c8-4ab5-9957-0f4ef8f8edd9" (UID: "139c57d0-06c8-4ab5-9957-0f4ef8f8edd9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.682694 4925 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-host\") on node \"crc\" DevicePath \"\"" Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.691257 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-kube-api-access-846sm" (OuterVolumeSpecName: "kube-api-access-846sm") pod "139c57d0-06c8-4ab5-9957-0f4ef8f8edd9" (UID: "139c57d0-06c8-4ab5-9957-0f4ef8f8edd9"). InnerVolumeSpecName "kube-api-access-846sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:15:52 crc kubenswrapper[4925]: I0202 12:15:52.786248 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-846sm\" (UniqueName: \"kubernetes.io/projected/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9-kube-api-access-846sm\") on node \"crc\" DevicePath \"\"" Feb 02 12:15:53 crc kubenswrapper[4925]: I0202 12:15:53.485169 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-7kwp9" Feb 02 12:15:53 crc kubenswrapper[4925]: I0202 12:15:53.798149 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-s7vgc/crc-debug-btgcw"] Feb 02 12:15:53 crc kubenswrapper[4925]: E0202 12:15:53.798584 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139c57d0-06c8-4ab5-9957-0f4ef8f8edd9" containerName="container-00" Feb 02 12:15:53 crc kubenswrapper[4925]: I0202 12:15:53.798601 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="139c57d0-06c8-4ab5-9957-0f4ef8f8edd9" containerName="container-00" Feb 02 12:15:53 crc kubenswrapper[4925]: I0202 12:15:53.798793 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="139c57d0-06c8-4ab5-9957-0f4ef8f8edd9" containerName="container-00" Feb 02 12:15:53 crc kubenswrapper[4925]: I0202 12:15:53.799505 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:53 crc kubenswrapper[4925]: I0202 12:15:53.909784 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60acfb01-ddc4-491a-8cb4-4b352629cb7b-host\") pod \"crc-debug-btgcw\" (UID: \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\") " pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:53 crc kubenswrapper[4925]: I0202 12:15:53.910322 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvk9h\" (UniqueName: \"kubernetes.io/projected/60acfb01-ddc4-491a-8cb4-4b352629cb7b-kube-api-access-xvk9h\") pod \"crc-debug-btgcw\" (UID: \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\") " pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.012144 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/60acfb01-ddc4-491a-8cb4-4b352629cb7b-host\") pod \"crc-debug-btgcw\" (UID: \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\") " pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.012239 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvk9h\" (UniqueName: \"kubernetes.io/projected/60acfb01-ddc4-491a-8cb4-4b352629cb7b-kube-api-access-xvk9h\") pod \"crc-debug-btgcw\" (UID: \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\") " pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.012286 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60acfb01-ddc4-491a-8cb4-4b352629cb7b-host\") pod \"crc-debug-btgcw\" (UID: \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\") " pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.033656 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvk9h\" (UniqueName: \"kubernetes.io/projected/60acfb01-ddc4-491a-8cb4-4b352629cb7b-kube-api-access-xvk9h\") pod \"crc-debug-btgcw\" (UID: \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\") " pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.117615 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.494206 4925 generic.go:334] "Generic (PLEG): container finished" podID="60acfb01-ddc4-491a-8cb4-4b352629cb7b" containerID="704f63f59aaf68cfe80f295a0fbf179c711f51af15f31d4d7e10f761d79b7f41" exitCode=0 Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.494278 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/crc-debug-btgcw" event={"ID":"60acfb01-ddc4-491a-8cb4-4b352629cb7b","Type":"ContainerDied","Data":"704f63f59aaf68cfe80f295a0fbf179c711f51af15f31d4d7e10f761d79b7f41"} Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.494570 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/crc-debug-btgcw" event={"ID":"60acfb01-ddc4-491a-8cb4-4b352629cb7b","Type":"ContainerStarted","Data":"7099255d5136e5a0ab8b023e50bf32802c430d7798c67daadb60cb1d45296bcd"} Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.542177 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-s7vgc/crc-debug-btgcw"] Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.552321 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-s7vgc/crc-debug-btgcw"] Feb 02 12:15:54 crc kubenswrapper[4925]: I0202 12:15:54.688606 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139c57d0-06c8-4ab5-9957-0f4ef8f8edd9" path="/var/lib/kubelet/pods/139c57d0-06c8-4ab5-9957-0f4ef8f8edd9/volumes" Feb 02 12:15:55 crc kubenswrapper[4925]: I0202 12:15:55.605124 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:55 crc kubenswrapper[4925]: I0202 12:15:55.642506 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60acfb01-ddc4-491a-8cb4-4b352629cb7b-host\") pod \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\" (UID: \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\") " Feb 02 12:15:55 crc kubenswrapper[4925]: I0202 12:15:55.642750 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvk9h\" (UniqueName: \"kubernetes.io/projected/60acfb01-ddc4-491a-8cb4-4b352629cb7b-kube-api-access-xvk9h\") pod \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\" (UID: \"60acfb01-ddc4-491a-8cb4-4b352629cb7b\") " Feb 02 12:15:55 crc kubenswrapper[4925]: I0202 12:15:55.642880 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60acfb01-ddc4-491a-8cb4-4b352629cb7b-host" (OuterVolumeSpecName: "host") pod "60acfb01-ddc4-491a-8cb4-4b352629cb7b" (UID: "60acfb01-ddc4-491a-8cb4-4b352629cb7b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 12:15:55 crc kubenswrapper[4925]: I0202 12:15:55.643339 4925 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60acfb01-ddc4-491a-8cb4-4b352629cb7b-host\") on node \"crc\" DevicePath \"\"" Feb 02 12:15:55 crc kubenswrapper[4925]: I0202 12:15:55.647440 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60acfb01-ddc4-491a-8cb4-4b352629cb7b-kube-api-access-xvk9h" (OuterVolumeSpecName: "kube-api-access-xvk9h") pod "60acfb01-ddc4-491a-8cb4-4b352629cb7b" (UID: "60acfb01-ddc4-491a-8cb4-4b352629cb7b"). InnerVolumeSpecName "kube-api-access-xvk9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:15:55 crc kubenswrapper[4925]: I0202 12:15:55.744917 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvk9h\" (UniqueName: \"kubernetes.io/projected/60acfb01-ddc4-491a-8cb4-4b352629cb7b-kube-api-access-xvk9h\") on node \"crc\" DevicePath \"\"" Feb 02 12:15:56 crc kubenswrapper[4925]: I0202 12:15:56.511039 4925 scope.go:117] "RemoveContainer" containerID="704f63f59aaf68cfe80f295a0fbf179c711f51af15f31d4d7e10f761d79b7f41" Feb 02 12:15:56 crc kubenswrapper[4925]: I0202 12:15:56.511166 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-s7vgc/crc-debug-btgcw" Feb 02 12:15:56 crc kubenswrapper[4925]: I0202 12:15:56.673675 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60acfb01-ddc4-491a-8cb4-4b352629cb7b" path="/var/lib/kubelet/pods/60acfb01-ddc4-491a-8cb4-4b352629cb7b/volumes" Feb 02 12:16:06 crc kubenswrapper[4925]: I0202 12:16:06.797372 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tnvmf"] Feb 02 12:16:06 crc kubenswrapper[4925]: E0202 12:16:06.798474 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60acfb01-ddc4-491a-8cb4-4b352629cb7b" containerName="container-00" Feb 02 12:16:06 crc kubenswrapper[4925]: I0202 12:16:06.798494 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="60acfb01-ddc4-491a-8cb4-4b352629cb7b" containerName="container-00" Feb 02 12:16:06 crc kubenswrapper[4925]: I0202 12:16:06.798743 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="60acfb01-ddc4-491a-8cb4-4b352629cb7b" containerName="container-00" Feb 02 12:16:06 crc kubenswrapper[4925]: I0202 12:16:06.800394 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:06 crc kubenswrapper[4925]: I0202 12:16:06.806451 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnvmf"] Feb 02 12:16:06 crc kubenswrapper[4925]: I0202 12:16:06.914322 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-utilities\") pod \"redhat-marketplace-tnvmf\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:06 crc kubenswrapper[4925]: I0202 12:16:06.914697 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88xv\" (UniqueName: \"kubernetes.io/projected/66739257-570b-4632-803f-6cf8fb5a1467-kube-api-access-d88xv\") pod \"redhat-marketplace-tnvmf\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:06 crc kubenswrapper[4925]: I0202 12:16:06.914735 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-catalog-content\") pod \"redhat-marketplace-tnvmf\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:07 crc kubenswrapper[4925]: I0202 12:16:07.016633 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-catalog-content\") pod \"redhat-marketplace-tnvmf\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:07 crc kubenswrapper[4925]: I0202 12:16:07.016744 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-utilities\") pod \"redhat-marketplace-tnvmf\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:07 crc kubenswrapper[4925]: I0202 12:16:07.016900 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d88xv\" (UniqueName: \"kubernetes.io/projected/66739257-570b-4632-803f-6cf8fb5a1467-kube-api-access-d88xv\") pod \"redhat-marketplace-tnvmf\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:07 crc kubenswrapper[4925]: I0202 12:16:07.017355 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-utilities\") pod \"redhat-marketplace-tnvmf\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:07 crc kubenswrapper[4925]: I0202 12:16:07.017364 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-catalog-content\") pod \"redhat-marketplace-tnvmf\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:07 crc kubenswrapper[4925]: I0202 12:16:07.038441 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d88xv\" (UniqueName: \"kubernetes.io/projected/66739257-570b-4632-803f-6cf8fb5a1467-kube-api-access-d88xv\") pod \"redhat-marketplace-tnvmf\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:07 crc kubenswrapper[4925]: I0202 12:16:07.128396 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:07 crc kubenswrapper[4925]: I0202 12:16:07.634728 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnvmf"] Feb 02 12:16:08 crc kubenswrapper[4925]: I0202 12:16:08.619030 4925 generic.go:334] "Generic (PLEG): container finished" podID="66739257-570b-4632-803f-6cf8fb5a1467" containerID="cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22" exitCode=0 Feb 02 12:16:08 crc kubenswrapper[4925]: I0202 12:16:08.619115 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnvmf" event={"ID":"66739257-570b-4632-803f-6cf8fb5a1467","Type":"ContainerDied","Data":"cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22"} Feb 02 12:16:08 crc kubenswrapper[4925]: I0202 12:16:08.620470 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnvmf" event={"ID":"66739257-570b-4632-803f-6cf8fb5a1467","Type":"ContainerStarted","Data":"41f8851f6ddaa0c1cdee8dee08b87390edc1996d6b9a037dbb41b8d3b71122a1"} Feb 02 12:16:08 crc kubenswrapper[4925]: I0202 12:16:08.632756 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 12:16:10 crc kubenswrapper[4925]: I0202 12:16:10.637938 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnvmf" event={"ID":"66739257-570b-4632-803f-6cf8fb5a1467","Type":"ContainerStarted","Data":"8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409"} Feb 02 12:16:11 crc kubenswrapper[4925]: I0202 12:16:11.648876 4925 generic.go:334] "Generic (PLEG): container finished" podID="66739257-570b-4632-803f-6cf8fb5a1467" containerID="8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409" exitCode=0 Feb 02 12:16:11 crc kubenswrapper[4925]: I0202 12:16:11.649025 4925 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-tnvmf" event={"ID":"66739257-570b-4632-803f-6cf8fb5a1467","Type":"ContainerDied","Data":"8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409"} Feb 02 12:16:12 crc kubenswrapper[4925]: I0202 12:16:12.660450 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnvmf" event={"ID":"66739257-570b-4632-803f-6cf8fb5a1467","Type":"ContainerStarted","Data":"70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced"} Feb 02 12:16:12 crc kubenswrapper[4925]: I0202 12:16:12.680521 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tnvmf" podStartSLOduration=3.248184967 podStartE2EDuration="6.680497254s" podCreationTimestamp="2026-02-02 12:16:06 +0000 UTC" firstStartedPulling="2026-02-02 12:16:08.632494835 +0000 UTC m=+4745.636743797" lastFinishedPulling="2026-02-02 12:16:12.064807132 +0000 UTC m=+4749.069056084" observedRunningTime="2026-02-02 12:16:12.679408684 +0000 UTC m=+4749.683657656" watchObservedRunningTime="2026-02-02 12:16:12.680497254 +0000 UTC m=+4749.684746216" Feb 02 12:16:17 crc kubenswrapper[4925]: I0202 12:16:17.128741 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:17 crc kubenswrapper[4925]: I0202 12:16:17.129628 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:17 crc kubenswrapper[4925]: I0202 12:16:17.176700 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:17 crc kubenswrapper[4925]: I0202 12:16:17.754479 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:17 crc kubenswrapper[4925]: I0202 12:16:17.804259 4925 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnvmf"] Feb 02 12:16:19 crc kubenswrapper[4925]: I0202 12:16:19.716925 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tnvmf" podUID="66739257-570b-4632-803f-6cf8fb5a1467" containerName="registry-server" containerID="cri-o://70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced" gracePeriod=2 Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.164607 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.274304 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d88xv\" (UniqueName: \"kubernetes.io/projected/66739257-570b-4632-803f-6cf8fb5a1467-kube-api-access-d88xv\") pod \"66739257-570b-4632-803f-6cf8fb5a1467\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.274522 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-catalog-content\") pod \"66739257-570b-4632-803f-6cf8fb5a1467\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.274602 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-utilities\") pod \"66739257-570b-4632-803f-6cf8fb5a1467\" (UID: \"66739257-570b-4632-803f-6cf8fb5a1467\") " Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.276564 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-utilities" (OuterVolumeSpecName: "utilities") pod 
"66739257-570b-4632-803f-6cf8fb5a1467" (UID: "66739257-570b-4632-803f-6cf8fb5a1467"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.282031 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66739257-570b-4632-803f-6cf8fb5a1467-kube-api-access-d88xv" (OuterVolumeSpecName: "kube-api-access-d88xv") pod "66739257-570b-4632-803f-6cf8fb5a1467" (UID: "66739257-570b-4632-803f-6cf8fb5a1467"). InnerVolumeSpecName "kube-api-access-d88xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.304356 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66739257-570b-4632-803f-6cf8fb5a1467" (UID: "66739257-570b-4632-803f-6cf8fb5a1467"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.344276 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8697ffdb94-2bnsl_38619cef-521e-4e12-9919-8846bed56c10/barbican-api/0.log" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.377723 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.377761 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66739257-570b-4632-803f-6cf8fb5a1467-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.377773 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d88xv\" (UniqueName: \"kubernetes.io/projected/66739257-570b-4632-803f-6cf8fb5a1467-kube-api-access-d88xv\") on node \"crc\" DevicePath \"\"" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.565975 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cb85bfdc6-wzdz4_461effdf-7e6d-47d3-85f8-eac7940d2100/barbican-keystone-listener/0.log" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.574794 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8697ffdb94-2bnsl_38619cef-521e-4e12-9919-8846bed56c10/barbican-api-log/0.log" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.727938 4925 generic.go:334] "Generic (PLEG): container finished" podID="66739257-570b-4632-803f-6cf8fb5a1467" containerID="70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced" exitCode=0 Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.727977 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnvmf" 
event={"ID":"66739257-570b-4632-803f-6cf8fb5a1467","Type":"ContainerDied","Data":"70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced"} Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.728002 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnvmf" event={"ID":"66739257-570b-4632-803f-6cf8fb5a1467","Type":"ContainerDied","Data":"41f8851f6ddaa0c1cdee8dee08b87390edc1996d6b9a037dbb41b8d3b71122a1"} Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.728018 4925 scope.go:117] "RemoveContainer" containerID="70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.728140 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tnvmf" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.760250 4925 scope.go:117] "RemoveContainer" containerID="8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.768318 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnvmf"] Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.778454 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnvmf"] Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.790381 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cb85bfdc6-wzdz4_461effdf-7e6d-47d3-85f8-eac7940d2100/barbican-keystone-listener-log/0.log" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.815705 4925 scope.go:117] "RemoveContainer" containerID="cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.821598 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-7f574dbb79-fc5vn_604a4d9b-a323-464c-b7f4-e41503e992f4/barbican-worker/0.log" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.853587 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f574dbb79-fc5vn_604a4d9b-a323-464c-b7f4-e41503e992f4/barbican-worker-log/0.log" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.855406 4925 scope.go:117] "RemoveContainer" containerID="70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced" Feb 02 12:16:20 crc kubenswrapper[4925]: E0202 12:16:20.855813 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced\": container with ID starting with 70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced not found: ID does not exist" containerID="70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.855841 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced"} err="failed to get container status \"70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced\": rpc error: code = NotFound desc = could not find container \"70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced\": container with ID starting with 70187cb12aaf487c44b05bb88ce7144104fec5d72cf54e460fb3a4eab50efced not found: ID does not exist" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.855863 4925 scope.go:117] "RemoveContainer" containerID="8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409" Feb 02 12:16:20 crc kubenswrapper[4925]: E0202 12:16:20.857628 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409\": container with ID starting with 8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409 not found: ID does not exist" containerID="8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.857654 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409"} err="failed to get container status \"8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409\": rpc error: code = NotFound desc = could not find container \"8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409\": container with ID starting with 8fc78bbad7f7707fa388cd9adbf9f2174c612f309a53fb7d3514f5f697136409 not found: ID does not exist" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.857670 4925 scope.go:117] "RemoveContainer" containerID="cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22" Feb 02 12:16:20 crc kubenswrapper[4925]: E0202 12:16:20.858010 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22\": container with ID starting with cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22 not found: ID does not exist" containerID="cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22" Feb 02 12:16:20 crc kubenswrapper[4925]: I0202 12:16:20.858032 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22"} err="failed to get container status \"cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22\": rpc error: code = NotFound desc = could not find container \"cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22\": container with ID 
starting with cfab6d0993f803b84bef303167ac56be78dc0396cac01315f08e87d6799c7d22 not found: ID does not exist" Feb 02 12:16:21 crc kubenswrapper[4925]: I0202 12:16:21.051923 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w_4a342fe3-c33f-4a54-a59f-9bba07acc904/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:21 crc kubenswrapper[4925]: I0202 12:16:21.096294 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8803b0fe-f2e6-41bd-b2e8-b970178ff360/ceilometer-central-agent/0.log" Feb 02 12:16:21 crc kubenswrapper[4925]: I0202 12:16:21.233561 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8803b0fe-f2e6-41bd-b2e8-b970178ff360/proxy-httpd/0.log" Feb 02 12:16:21 crc kubenswrapper[4925]: I0202 12:16:21.241720 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8803b0fe-f2e6-41bd-b2e8-b970178ff360/ceilometer-notification-agent/0.log" Feb 02 12:16:21 crc kubenswrapper[4925]: I0202 12:16:21.286132 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8803b0fe-f2e6-41bd-b2e8-b970178ff360/sg-core/0.log" Feb 02 12:16:21 crc kubenswrapper[4925]: I0202 12:16:21.433402 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr_c87ae68d-67eb-45b4-8971-5d5d14d6c36b/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:21 crc kubenswrapper[4925]: I0202 12:16:21.475903 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2_600dd95b-ee69-45e7-918b-85650f9e2980/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:22 crc kubenswrapper[4925]: I0202 12:16:22.094069 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-backup-0_161c8104-b092-42f2-8e76-513b0e7991d6/probe/0.log" Feb 02 12:16:22 crc kubenswrapper[4925]: I0202 12:16:22.317524 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_314e8cb6-036e-4365-9056-026caca906f1/cinder-api-log/0.log" Feb 02 12:16:22 crc kubenswrapper[4925]: I0202 12:16:22.352453 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_314e8cb6-036e-4365-9056-026caca906f1/cinder-api/0.log" Feb 02 12:16:22 crc kubenswrapper[4925]: I0202 12:16:22.574668 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2a0d1352-9215-4b6e-831a-d9d654cc8a1e/cinder-scheduler/0.log" Feb 02 12:16:22 crc kubenswrapper[4925]: I0202 12:16:22.590007 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2a0d1352-9215-4b6e-831a-d9d654cc8a1e/probe/0.log" Feb 02 12:16:22 crc kubenswrapper[4925]: I0202 12:16:22.674762 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66739257-570b-4632-803f-6cf8fb5a1467" path="/var/lib/kubelet/pods/66739257-570b-4632-803f-6cf8fb5a1467/volumes" Feb 02 12:16:23 crc kubenswrapper[4925]: I0202 12:16:23.147438 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_161c8104-b092-42f2-8e76-513b0e7991d6/cinder-backup/0.log" Feb 02 12:16:23 crc kubenswrapper[4925]: I0202 12:16:23.388969 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0e7f2d02-9fa4-4e06-a6ae-77c1390e574b/probe/0.log" Feb 02 12:16:23 crc kubenswrapper[4925]: I0202 12:16:23.488703 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg_34087aed-542d-424c-a71e-a277cf32d94c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:23 crc kubenswrapper[4925]: I0202 12:16:23.814858 4925 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-csw6t_a07f6a0e-2ed8-4213-be0f-ed8ae1005a14/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:23 crc kubenswrapper[4925]: I0202 12:16:23.998214 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-z5snj_151bbe9a-f79f-475b-88ad-1337e6ec9312/init/0.log" Feb 02 12:16:24 crc kubenswrapper[4925]: I0202 12:16:24.189431 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-z5snj_151bbe9a-f79f-475b-88ad-1337e6ec9312/init/0.log" Feb 02 12:16:24 crc kubenswrapper[4925]: I0202 12:16:24.581844 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_29d38bf8-6523-4fe2-9fb9-7385f5ea31bf/glance-httpd/0.log" Feb 02 12:16:24 crc kubenswrapper[4925]: I0202 12:16:24.609290 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-z5snj_151bbe9a-f79f-475b-88ad-1337e6ec9312/dnsmasq-dns/0.log" Feb 02 12:16:25 crc kubenswrapper[4925]: I0202 12:16:25.183751 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_29d38bf8-6523-4fe2-9fb9-7385f5ea31bf/glance-log/0.log" Feb 02 12:16:25 crc kubenswrapper[4925]: I0202 12:16:25.196763 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_99d0cf5b-0a90-49c5-8302-4401070f1c3c/glance-log/0.log" Feb 02 12:16:25 crc kubenswrapper[4925]: I0202 12:16:25.231862 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_99d0cf5b-0a90-49c5-8302-4401070f1c3c/glance-httpd/0.log" Feb 02 12:16:25 crc kubenswrapper[4925]: I0202 12:16:25.551842 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c6d58558b-gh6c8_acc24fd1-e3f5-4235-9190-c9aad51e4282/horizon/0.log" Feb 02 12:16:25 crc kubenswrapper[4925]: I0202 
12:16:25.677590 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg_749615c6-2bdb-4b47-aced-b8dcb3041df6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:25 crc kubenswrapper[4925]: I0202 12:16:25.780063 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c6d58558b-gh6c8_acc24fd1-e3f5-4235-9190-c9aad51e4282/horizon-log/0.log" Feb 02 12:16:25 crc kubenswrapper[4925]: I0202 12:16:25.805624 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pp5mc_ef8d17fd-9d76-4856-9308-9d7630003827/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:26 crc kubenswrapper[4925]: I0202 12:16:26.118683 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500561-khb48_90190a5e-2227-47b8-83e8-4f3f26891a14/keystone-cron/0.log" Feb 02 12:16:26 crc kubenswrapper[4925]: I0202 12:16:26.379588 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f/kube-state-metrics/0.log" Feb 02 12:16:26 crc kubenswrapper[4925]: I0202 12:16:26.439978 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-b5rct_8e448d5d-ae77-439d-804b-eb4bea2a957d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:26 crc kubenswrapper[4925]: I0202 12:16:26.641802 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e/manila-api-log/0.log" Feb 02 12:16:26 crc kubenswrapper[4925]: I0202 12:16:26.816398 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e/manila-api/0.log" Feb 02 12:16:26 crc kubenswrapper[4925]: I0202 12:16:26.905048 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-7878d757f7-z5tzg_c10f0dec-2709-40e9-90ce-ad8698d98599/keystone-api/0.log" Feb 02 12:16:26 crc kubenswrapper[4925]: I0202 12:16:26.953845 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4a002372-3866-4b2a-8f00-f5ae284f9e62/manila-scheduler/0.log" Feb 02 12:16:26 crc kubenswrapper[4925]: I0202 12:16:26.995483 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4a002372-3866-4b2a-8f00-f5ae284f9e62/probe/0.log" Feb 02 12:16:27 crc kubenswrapper[4925]: I0202 12:16:27.165574 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_b6f74fd7-2cf3-4fc7-9535-50503f677c96/probe/0.log" Feb 02 12:16:27 crc kubenswrapper[4925]: I0202 12:16:27.250027 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_b6f74fd7-2cf3-4fc7-9535-50503f677c96/manila-share/0.log" Feb 02 12:16:27 crc kubenswrapper[4925]: I0202 12:16:27.694056 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7755c4bbbc-qkg7f_66b56382-6514-4567-8b82-42454f43f8d1/neutron-httpd/0.log" Feb 02 12:16:27 crc kubenswrapper[4925]: I0202 12:16:27.850788 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7755c4bbbc-qkg7f_66b56382-6514-4567-8b82-42454f43f8d1/neutron-api/0.log" Feb 02 12:16:27 crc kubenswrapper[4925]: I0202 12:16:27.865109 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds_865424ac-9ae9-45a6-9f69-b239f8d3d746/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:28 crc kubenswrapper[4925]: I0202 12:16:28.641315 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c/nova-cell0-conductor-conductor/0.log" Feb 02 12:16:28 crc kubenswrapper[4925]: I0202 12:16:28.857975 4925 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2526e2c5-e58e-4e8a-b55d-ec5d06a490d1/nova-api-log/0.log" Feb 02 12:16:29 crc kubenswrapper[4925]: I0202 12:16:29.183129 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_be99b255-6467-42af-bb3b-4e6d05fccc64/nova-cell1-conductor-conductor/0.log" Feb 02 12:16:29 crc kubenswrapper[4925]: I0202 12:16:29.263484 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2526e2c5-e58e-4e8a-b55d-ec5d06a490d1/nova-api-api/0.log" Feb 02 12:16:29 crc kubenswrapper[4925]: I0202 12:16:29.477462 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f7ad506b-3504-4825-9ae1-94937ca48d1a/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 12:16:29 crc kubenswrapper[4925]: I0202 12:16:29.537207 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd_374c9a22-b870-43ee-a27a-499a0d607e32/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:29 crc kubenswrapper[4925]: I0202 12:16:29.865463 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3287d1a5-371d-44d3-a215-6937bf4da1a1/nova-metadata-log/0.log" Feb 02 12:16:30 crc kubenswrapper[4925]: I0202 12:16:30.349371 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_54598a97-2180-4fe5-a267-970c64919ba0/nova-scheduler-scheduler/0.log" Feb 02 12:16:30 crc kubenswrapper[4925]: I0202 12:16:30.467021 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64d4545e-f93a-4767-bba7-d01bcaf43c4f/mysql-bootstrap/0.log" Feb 02 12:16:30 crc kubenswrapper[4925]: I0202 12:16:30.718776 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64d4545e-f93a-4767-bba7-d01bcaf43c4f/mysql-bootstrap/0.log" Feb 02 
12:16:30 crc kubenswrapper[4925]: I0202 12:16:30.732371 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64d4545e-f93a-4767-bba7-d01bcaf43c4f/galera/0.log" Feb 02 12:16:30 crc kubenswrapper[4925]: I0202 12:16:30.981387 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d99509bd-1ed8-4516-8ed2-8d99b8e33c67/mysql-bootstrap/0.log" Feb 02 12:16:31 crc kubenswrapper[4925]: I0202 12:16:31.224050 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d99509bd-1ed8-4516-8ed2-8d99b8e33c67/mysql-bootstrap/0.log" Feb 02 12:16:31 crc kubenswrapper[4925]: I0202 12:16:31.289611 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d99509bd-1ed8-4516-8ed2-8d99b8e33c67/galera/0.log" Feb 02 12:16:31 crc kubenswrapper[4925]: I0202 12:16:31.471989 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_515789df-211a-4465-8f1f-5ab3dadcb813/openstackclient/0.log" Feb 02 12:16:31 crc kubenswrapper[4925]: I0202 12:16:31.675639 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gr5rf_feb2b36a-609f-4805-8b50-fe0731522375/ovn-controller/0.log" Feb 02 12:16:31 crc kubenswrapper[4925]: I0202 12:16:31.719651 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0e7f2d02-9fa4-4e06-a6ae-77c1390e574b/cinder-volume/0.log" Feb 02 12:16:31 crc kubenswrapper[4925]: I0202 12:16:31.949269 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-r55bh_952fc6ba-02b5-4a94-90b4-2a206213f818/openstack-network-exporter/0.log" Feb 02 12:16:31 crc kubenswrapper[4925]: I0202 12:16:31.957933 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3287d1a5-371d-44d3-a215-6937bf4da1a1/nova-metadata-metadata/0.log" Feb 02 12:16:32 crc kubenswrapper[4925]: I0202 
12:16:32.128816 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26w5s_d118fb79-debc-4d5d-b390-38f913681237/ovsdb-server-init/0.log" Feb 02 12:16:32 crc kubenswrapper[4925]: I0202 12:16:32.351575 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26w5s_d118fb79-debc-4d5d-b390-38f913681237/ovsdb-server/0.log" Feb 02 12:16:32 crc kubenswrapper[4925]: I0202 12:16:32.394831 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26w5s_d118fb79-debc-4d5d-b390-38f913681237/ovsdb-server-init/0.log" Feb 02 12:16:32 crc kubenswrapper[4925]: I0202 12:16:32.431117 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26w5s_d118fb79-debc-4d5d-b390-38f913681237/ovs-vswitchd/0.log" Feb 02 12:16:32 crc kubenswrapper[4925]: I0202 12:16:32.596227 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-sv6hb_9b644239-1d8a-4dd1-96ab-6125f8ccb4e2/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:32 crc kubenswrapper[4925]: I0202 12:16:32.749134 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b370a7b-282f-4481-8275-39c981b54f35/openstack-network-exporter/0.log" Feb 02 12:16:32 crc kubenswrapper[4925]: I0202 12:16:32.749257 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b370a7b-282f-4481-8275-39c981b54f35/ovn-northd/0.log" Feb 02 12:16:32 crc kubenswrapper[4925]: I0202 12:16:32.879894 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0798cb6a-03c0-467e-b65f-05612b9213d3/openstack-network-exporter/0.log" Feb 02 12:16:33 crc kubenswrapper[4925]: I0202 12:16:33.015381 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0798cb6a-03c0-467e-b65f-05612b9213d3/ovsdbserver-nb/0.log" Feb 02 12:16:33 crc 
kubenswrapper[4925]: I0202 12:16:33.127007 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ab8a8eaa-8f11-490e-9251-e4d34b8c481b/ovsdbserver-sb/0.log" Feb 02 12:16:33 crc kubenswrapper[4925]: I0202 12:16:33.149803 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ab8a8eaa-8f11-490e-9251-e4d34b8c481b/openstack-network-exporter/0.log" Feb 02 12:16:33 crc kubenswrapper[4925]: I0202 12:16:33.392751 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-596c688466-nwnv5_97b44970-d770-46b9-9c10-a8ec03d3bbaf/placement-api/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 12:16:34.063859 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f584c201-5eae-46d6-a9c1-b360f5506d24/setup-container/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 12:16:34.180540 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f584c201-5eae-46d6-a9c1-b360f5506d24/setup-container/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 12:16:34.220514 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-596c688466-nwnv5_97b44970-d770-46b9-9c10-a8ec03d3bbaf/placement-log/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 12:16:34.257220 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f584c201-5eae-46d6-a9c1-b360f5506d24/rabbitmq/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 12:16:34.412980 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f988cc52-4086-4387-971c-ecd4837c512c/setup-container/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 12:16:34.565742 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f988cc52-4086-4387-971c-ecd4837c512c/setup-container/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 
12:16:34.602718 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f988cc52-4086-4387-971c-ecd4837c512c/rabbitmq/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 12:16:34.654041 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx_4c793349-e8e5-419c-9e2c-4d3e4dd7500c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 12:16:34.855504 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q_92c7fc53-ac73-4641-90de-b290231ea6a9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:34 crc kubenswrapper[4925]: I0202 12:16:34.927512 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rdnbg_dd7acac2-73fe-4a28-853a-8455a8b7ddcc/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:35 crc kubenswrapper[4925]: I0202 12:16:35.090621 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-nl7n4_11befe83-a359-400c-b072-1778f7c29f74/ssh-known-hosts-edpm-deployment/0.log" Feb 02 12:16:35 crc kubenswrapper[4925]: I0202 12:16:35.267504 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_7390b503-a9bf-41e3-9506-1f63b8ad6d7d/tempest-tests-tempest-tests-runner/0.log" Feb 02 12:16:35 crc kubenswrapper[4925]: I0202 12:16:35.312023 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fdd0c34b-7157-4648-ae8e-de13e12bcaed/test-operator-logs-container/0.log" Feb 02 12:16:35 crc kubenswrapper[4925]: I0202 12:16:35.957921 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-zb42p_1d4b3b51-6672-4310-92e9-5a5c88c192ba/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:16:43 crc kubenswrapper[4925]: I0202 12:16:43.317626 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6bf67c7c-0e93-499e-9530-735520afac74/memcached/0.log" Feb 02 12:17:02 crc kubenswrapper[4925]: I0202 12:17:02.072901 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/util/0.log" Feb 02 12:17:02 crc kubenswrapper[4925]: I0202 12:17:02.245894 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/util/0.log" Feb 02 12:17:02 crc kubenswrapper[4925]: I0202 12:17:02.292275 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/pull/0.log" Feb 02 12:17:02 crc kubenswrapper[4925]: I0202 12:17:02.292569 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/pull/0.log" Feb 02 12:17:02 crc kubenswrapper[4925]: I0202 12:17:02.432873 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/util/0.log" Feb 02 12:17:02 crc kubenswrapper[4925]: I0202 12:17:02.434098 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/pull/0.log" Feb 02 12:17:02 crc kubenswrapper[4925]: I0202 
12:17:02.503538 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/extract/0.log" Feb 02 12:17:02 crc kubenswrapper[4925]: I0202 12:17:02.691465 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-5qxgq_271532e8-0b2a-40bc-b982-56e6c0c706dc/manager/0.log" Feb 02 12:17:02 crc kubenswrapper[4925]: I0202 12:17:02.849050 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-zvg88_6f64f1b5-8b8f-48b6-934c-5d148565b151/manager/0.log" Feb 02 12:17:03 crc kubenswrapper[4925]: I0202 12:17:03.100041 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-swkbc_e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73/manager/0.log" Feb 02 12:17:03 crc kubenswrapper[4925]: I0202 12:17:03.668040 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-wggcm_2670eaa9-d6c1-479d-98d1-9a86c0c09305/manager/0.log" Feb 02 12:17:03 crc kubenswrapper[4925]: I0202 12:17:03.975683 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-mfxvn_8405a39c-7526-47b8-93b8-b9bb03cb970b/manager/0.log" Feb 02 12:17:04 crc kubenswrapper[4925]: I0202 12:17:04.148745 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-fgf8c_714728e3-dda9-47d3-aca5-c9bf8a13c2eb/manager/0.log" Feb 02 12:17:04 crc kubenswrapper[4925]: I0202 12:17:04.444967 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-m9rb5_9b6aadaa-89ca-46f2-bf48-59726671b789/manager/0.log" Feb 02 12:17:04 crc 
kubenswrapper[4925]: I0202 12:17:04.470211 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-kbc5t_a8a71810-ebcf-4908-8e41-73fdce287188/manager/0.log" Feb 02 12:17:04 crc kubenswrapper[4925]: I0202 12:17:04.514151 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-56b8d567c6-9sb76_057f6b87-28a7-46c6-8d51-c32937d77a6a/manager/0.log" Feb 02 12:17:04 crc kubenswrapper[4925]: I0202 12:17:04.735242 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-v4m7x_2d3514fc-34cd-4021-a4d9-662abe6bb56e/manager/0.log" Feb 02 12:17:04 crc kubenswrapper[4925]: I0202 12:17:04.754546 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-f9rbf_6db50ed1-76a9-48ad-b08e-07edd9d07421/manager/0.log" Feb 02 12:17:04 crc kubenswrapper[4925]: I0202 12:17:04.990514 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-8nf8m_7b8e50f8-9611-4be4-aa4e-a0834ec27a24/manager/0.log" Feb 02 12:17:05 crc kubenswrapper[4925]: I0202 12:17:05.552839 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-zksqs_85d89138-ff2c-4e69-bd55-bf6b2648d286/manager/0.log" Feb 02 12:17:05 crc kubenswrapper[4925]: I0202 12:17:05.591787 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-bfkmp_252fe85c-1645-4a4b-bd66-efe5814e9b09/manager/0.log" Feb 02 12:17:05 crc kubenswrapper[4925]: I0202 12:17:05.856637 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8_a4e64115-b62c-421f-8072-88fc52eef59e/manager/0.log" Feb 02 12:17:06 crc kubenswrapper[4925]: I0202 12:17:06.010727 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7bfc86c845-8crkz_f0498a78-8295-4910-bf25-61219ef0105c/operator/0.log" Feb 02 12:17:06 crc kubenswrapper[4925]: I0202 12:17:06.143338 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-grspq_f66c6d9e-dc09-4ffc-af2b-672b8406c132/registry-server/0.log" Feb 02 12:17:06 crc kubenswrapper[4925]: I0202 12:17:06.429944 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-5rz7t_e11ef3f5-cbad-483b-a5a6-dedfb5ec556f/manager/0.log" Feb 02 12:17:06 crc kubenswrapper[4925]: I0202 12:17:06.469667 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-zrg4p_88bf0458-e0ab-4b1b-ad4d-01e0f51780e8/manager/0.log" Feb 02 12:17:06 crc kubenswrapper[4925]: I0202 12:17:06.756118 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-4lhnh_fc69d485-23dc-4c0c-88ef-9fc6729d977d/manager/0.log" Feb 02 12:17:06 crc kubenswrapper[4925]: I0202 12:17:06.772463 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vw6m6_21d85aaf-29ca-4cc9-8831-bb5691bc29d9/operator/0.log" Feb 02 12:17:07 crc kubenswrapper[4925]: I0202 12:17:07.008422 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-8mpnq_07bdcdf5-a330-4524-9695-d089c2fbd4ae/manager/0.log" Feb 02 12:17:07 crc kubenswrapper[4925]: I0202 12:17:07.065903 4925 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-k579v_ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0/manager/0.log" Feb 02 12:17:07 crc kubenswrapper[4925]: I0202 12:17:07.188685 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-gbm72_2ce3d469-8592-45c6-aba0-f1a607694c6d/manager/0.log" Feb 02 12:17:07 crc kubenswrapper[4925]: I0202 12:17:07.519258 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d4f579c97-rrqkc_7112b3b6-a74c-4a93-94a2-8cbdbfd960b0/manager/0.log" Feb 02 12:17:27 crc kubenswrapper[4925]: I0202 12:17:27.021118 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ghpq7_693d8818-a349-4e21-80cd-26caca3271b5/control-plane-machine-set-operator/0.log" Feb 02 12:17:27 crc kubenswrapper[4925]: I0202 12:17:27.188729 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cmf26_5da7ca31-35e0-47b3-a877-63d50ed68d70/kube-rbac-proxy/0.log" Feb 02 12:17:27 crc kubenswrapper[4925]: I0202 12:17:27.192582 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cmf26_5da7ca31-35e0-47b3-a877-63d50ed68d70/machine-api-operator/0.log" Feb 02 12:17:39 crc kubenswrapper[4925]: I0202 12:17:39.030370 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2kqvb_06d7c0c7-2b68-478e-8113-abae661d30f6/cert-manager-controller/0.log" Feb 02 12:17:39 crc kubenswrapper[4925]: I0202 12:17:39.292859 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-9bcc7_b0cdbe98-e1d1-4844-a567-695916cc41f0/cert-manager-webhook/0.log" Feb 02 12:17:39 crc kubenswrapper[4925]: I0202 12:17:39.301796 4925 log.go:25] 
"Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-2l2bm_8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd/cert-manager-cainjector/0.log" Feb 02 12:17:43 crc kubenswrapper[4925]: I0202 12:17:43.398640 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:17:43 crc kubenswrapper[4925]: I0202 12:17:43.400920 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:17:53 crc kubenswrapper[4925]: I0202 12:17:53.119508 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wvpzr_fdf9fdc0-d0bc-48eb-881f-9f053560d16d/nmstate-console-plugin/0.log" Feb 02 12:17:53 crc kubenswrapper[4925]: I0202 12:17:53.332160 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-26h6v_658d0400-3726-4797-a477-8d95c17ccd3a/nmstate-handler/0.log" Feb 02 12:17:53 crc kubenswrapper[4925]: I0202 12:17:53.420363 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hzmqd_3d287bf3-d7ef-4ccf-ad54-c56563a8092c/nmstate-metrics/0.log" Feb 02 12:17:53 crc kubenswrapper[4925]: I0202 12:17:53.435923 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hzmqd_3d287bf3-d7ef-4ccf-ad54-c56563a8092c/kube-rbac-proxy/0.log" Feb 02 12:17:53 crc kubenswrapper[4925]: I0202 12:17:53.552472 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tm82s_fd2e1ecb-2c35-4496-8679-da6345ee07a2/nmstate-operator/0.log" Feb 02 12:17:53 crc kubenswrapper[4925]: I0202 12:17:53.618447 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-j84xr_21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd/nmstate-webhook/0.log" Feb 02 12:18:13 crc kubenswrapper[4925]: I0202 12:18:13.398317 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:18:13 crc kubenswrapper[4925]: I0202 12:18:13.399007 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.219406 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-t7z6x_09785fed-de18-4a9b-b32f-8a3644ede917/kube-rbac-proxy/0.log" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.353560 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-t7z6x_09785fed-de18-4a9b-b32f-8a3644ede917/controller/0.log" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.443601 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-frr-files/0.log" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.551849 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-frr-files/0.log" 
Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.589894 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-reloader/0.log" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.589908 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-metrics/0.log" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.658949 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-reloader/0.log" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.825032 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-reloader/0.log" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.843853 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-frr-files/0.log" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.902270 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-metrics/0.log" Feb 02 12:18:19 crc kubenswrapper[4925]: I0202 12:18:19.909138 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-metrics/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.032054 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-reloader/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.046942 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-frr-files/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.069440 4925 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-metrics/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.071888 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/controller/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.234705 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/frr-metrics/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.235617 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/kube-rbac-proxy/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.285449 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/kube-rbac-proxy-frr/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.418832 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/reloader/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.492449 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5fpmg_fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2/frr-k8s-webhook-server/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.812964 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7c47d49988-6g6jm_5f1c0635-1bd7-4997-b0bd-5f57e7bd2893/manager/0.log" Feb 02 12:18:20 crc kubenswrapper[4925]: I0202 12:18:20.894529 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5754578b6f-nb2dq_0876b510-fef0-4243-b650-8369e62c4a93/webhook-server/0.log" Feb 02 12:18:21 crc kubenswrapper[4925]: I0202 12:18:21.007439 
4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dqhvw_263f4c60-783f-4109-bcf6-cbdd5e03ec0e/kube-rbac-proxy/0.log" Feb 02 12:18:21 crc kubenswrapper[4925]: I0202 12:18:21.560931 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dqhvw_263f4c60-783f-4109-bcf6-cbdd5e03ec0e/speaker/0.log" Feb 02 12:18:21 crc kubenswrapper[4925]: I0202 12:18:21.904774 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/frr/0.log" Feb 02 12:18:33 crc kubenswrapper[4925]: I0202 12:18:33.638616 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/util/0.log" Feb 02 12:18:33 crc kubenswrapper[4925]: I0202 12:18:33.918673 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/util/0.log" Feb 02 12:18:33 crc kubenswrapper[4925]: I0202 12:18:33.921014 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/pull/0.log" Feb 02 12:18:33 crc kubenswrapper[4925]: I0202 12:18:33.940147 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/pull/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.102961 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/util/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.114163 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/extract/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.119770 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/pull/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.274640 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/util/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.459588 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/util/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.460593 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/pull/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.466712 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/pull/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.611934 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/util/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.639618 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/pull/0.log" Feb 02 
12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.648542 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/extract/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.794456 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-utilities/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.959400 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-utilities/0.log" Feb 02 12:18:34 crc kubenswrapper[4925]: I0202 12:18:34.967719 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-content/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.002455 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-content/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.161994 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-utilities/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.165931 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-content/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.383402 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-utilities/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.645624 4925 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-utilities/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.663177 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-content/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.681131 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-content/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.831275 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/registry-server/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.880660 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-content/0.log" Feb 02 12:18:35 crc kubenswrapper[4925]: I0202 12:18:35.880789 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-utilities/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.185653 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6p5nd_7ed2f286-6b23-4789-9f42-9da9d276812e/marketplace-operator/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.338557 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-utilities/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.490764 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/registry-server/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.532505 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-utilities/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.576541 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-content/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.579876 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-content/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.737646 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-content/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.743019 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-utilities/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.937343 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-utilities/0.log" Feb 02 12:18:36 crc kubenswrapper[4925]: I0202 12:18:36.944811 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/registry-server/0.log" Feb 02 12:18:37 crc kubenswrapper[4925]: I0202 12:18:37.173241 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-content/0.log" Feb 02 12:18:37 crc kubenswrapper[4925]: I0202 12:18:37.205450 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-content/0.log" Feb 02 12:18:37 crc kubenswrapper[4925]: I0202 12:18:37.212914 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-utilities/0.log" Feb 02 12:18:37 crc kubenswrapper[4925]: I0202 12:18:37.393561 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-content/0.log" Feb 02 12:18:37 crc kubenswrapper[4925]: I0202 12:18:37.421855 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-utilities/0.log" Feb 02 12:18:37 crc kubenswrapper[4925]: I0202 12:18:37.934202 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/registry-server/0.log" Feb 02 12:18:43 crc kubenswrapper[4925]: I0202 12:18:43.398460 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:18:43 crc kubenswrapper[4925]: I0202 12:18:43.399337 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 02 12:18:43 crc kubenswrapper[4925]: I0202 12:18:43.399389 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 12:18:43 crc kubenswrapper[4925]: I0202 12:18:43.400469 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5d69f58afc7bdb04bc664b8e1f93d1bef487fdc416cef9285aac62e414c7431"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 12:18:43 crc kubenswrapper[4925]: I0202 12:18:43.400530 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://c5d69f58afc7bdb04bc664b8e1f93d1bef487fdc416cef9285aac62e414c7431" gracePeriod=600 Feb 02 12:18:43 crc kubenswrapper[4925]: I0202 12:18:43.974553 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="c5d69f58afc7bdb04bc664b8e1f93d1bef487fdc416cef9285aac62e414c7431" exitCode=0 Feb 02 12:18:43 crc kubenswrapper[4925]: I0202 12:18:43.974612 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"c5d69f58afc7bdb04bc664b8e1f93d1bef487fdc416cef9285aac62e414c7431"} Feb 02 12:18:43 crc kubenswrapper[4925]: I0202 12:18:43.974872 4925 scope.go:117] "RemoveContainer" containerID="3c5ad918a2748087997612d5fe35662c2faf9c5cb4a0d703c4935d15c722f282" Feb 02 12:18:44 crc kubenswrapper[4925]: I0202 12:18:44.987058 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5"} Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.656840 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xdjpb"] Feb 02 12:19:59 crc kubenswrapper[4925]: E0202 12:19:59.657936 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66739257-570b-4632-803f-6cf8fb5a1467" containerName="registry-server" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.657956 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="66739257-570b-4632-803f-6cf8fb5a1467" containerName="registry-server" Feb 02 12:19:59 crc kubenswrapper[4925]: E0202 12:19:59.657981 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66739257-570b-4632-803f-6cf8fb5a1467" containerName="extract-content" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.657992 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="66739257-570b-4632-803f-6cf8fb5a1467" containerName="extract-content" Feb 02 12:19:59 crc kubenswrapper[4925]: E0202 12:19:59.658022 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66739257-570b-4632-803f-6cf8fb5a1467" containerName="extract-utilities" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.658031 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="66739257-570b-4632-803f-6cf8fb5a1467" containerName="extract-utilities" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.658251 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="66739257-570b-4632-803f-6cf8fb5a1467" containerName="registry-server" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.659767 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.671029 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdjpb"] Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.682570 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-catalog-content\") pod \"community-operators-xdjpb\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.682838 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxnm\" (UniqueName: \"kubernetes.io/projected/dcb08437-fc63-4173-95ea-76c445338d3d-kube-api-access-7kxnm\") pod \"community-operators-xdjpb\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.682934 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-utilities\") pod \"community-operators-xdjpb\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.785507 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-catalog-content\") pod \"community-operators-xdjpb\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.786223 4925 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-catalog-content\") pod \"community-operators-xdjpb\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.786540 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kxnm\" (UniqueName: \"kubernetes.io/projected/dcb08437-fc63-4173-95ea-76c445338d3d-kube-api-access-7kxnm\") pod \"community-operators-xdjpb\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.786682 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-utilities\") pod \"community-operators-xdjpb\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.787039 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-utilities\") pod \"community-operators-xdjpb\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.808731 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kxnm\" (UniqueName: \"kubernetes.io/projected/dcb08437-fc63-4173-95ea-76c445338d3d-kube-api-access-7kxnm\") pod \"community-operators-xdjpb\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:19:59 crc kubenswrapper[4925]: I0202 12:19:59.984699 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:20:00 crc kubenswrapper[4925]: I0202 12:20:00.593570 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdjpb"] Feb 02 12:20:01 crc kubenswrapper[4925]: I0202 12:20:01.008559 4925 generic.go:334] "Generic (PLEG): container finished" podID="dcb08437-fc63-4173-95ea-76c445338d3d" containerID="5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3" exitCode=0 Feb 02 12:20:01 crc kubenswrapper[4925]: I0202 12:20:01.008641 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdjpb" event={"ID":"dcb08437-fc63-4173-95ea-76c445338d3d","Type":"ContainerDied","Data":"5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3"} Feb 02 12:20:01 crc kubenswrapper[4925]: I0202 12:20:01.008926 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdjpb" event={"ID":"dcb08437-fc63-4173-95ea-76c445338d3d","Type":"ContainerStarted","Data":"ff31e1172c8c105a16b5405844c86db1e6b1ed8300a622f0be8c3a5f717b259c"} Feb 02 12:20:03 crc kubenswrapper[4925]: I0202 12:20:03.026338 4925 generic.go:334] "Generic (PLEG): container finished" podID="dcb08437-fc63-4173-95ea-76c445338d3d" containerID="1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e" exitCode=0 Feb 02 12:20:03 crc kubenswrapper[4925]: I0202 12:20:03.026852 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdjpb" event={"ID":"dcb08437-fc63-4173-95ea-76c445338d3d","Type":"ContainerDied","Data":"1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e"} Feb 02 12:20:04 crc kubenswrapper[4925]: I0202 12:20:04.038140 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdjpb" 
event={"ID":"dcb08437-fc63-4173-95ea-76c445338d3d","Type":"ContainerStarted","Data":"aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353"} Feb 02 12:20:04 crc kubenswrapper[4925]: I0202 12:20:04.055624 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xdjpb" podStartSLOduration=2.6333785990000003 podStartE2EDuration="5.055608521s" podCreationTimestamp="2026-02-02 12:19:59 +0000 UTC" firstStartedPulling="2026-02-02 12:20:01.010693828 +0000 UTC m=+4978.014942790" lastFinishedPulling="2026-02-02 12:20:03.43292375 +0000 UTC m=+4980.437172712" observedRunningTime="2026-02-02 12:20:04.054030848 +0000 UTC m=+4981.058279830" watchObservedRunningTime="2026-02-02 12:20:04.055608521 +0000 UTC m=+4981.059857473" Feb 02 12:20:09 crc kubenswrapper[4925]: I0202 12:20:09.985119 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:20:09 crc kubenswrapper[4925]: I0202 12:20:09.985801 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:20:10 crc kubenswrapper[4925]: I0202 12:20:10.104193 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:20:10 crc kubenswrapper[4925]: I0202 12:20:10.156045 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:20:10 crc kubenswrapper[4925]: I0202 12:20:10.341236 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdjpb"] Feb 02 12:20:12 crc kubenswrapper[4925]: I0202 12:20:12.113068 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xdjpb" podUID="dcb08437-fc63-4173-95ea-76c445338d3d" containerName="registry-server" 
containerID="cri-o://aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353" gracePeriod=2 Feb 02 12:20:12 crc kubenswrapper[4925]: I0202 12:20:12.838804 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:20:12 crc kubenswrapper[4925]: I0202 12:20:12.847913 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kxnm\" (UniqueName: \"kubernetes.io/projected/dcb08437-fc63-4173-95ea-76c445338d3d-kube-api-access-7kxnm\") pod \"dcb08437-fc63-4173-95ea-76c445338d3d\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " Feb 02 12:20:12 crc kubenswrapper[4925]: I0202 12:20:12.848089 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-catalog-content\") pod \"dcb08437-fc63-4173-95ea-76c445338d3d\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " Feb 02 12:20:12 crc kubenswrapper[4925]: I0202 12:20:12.848127 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-utilities\") pod \"dcb08437-fc63-4173-95ea-76c445338d3d\" (UID: \"dcb08437-fc63-4173-95ea-76c445338d3d\") " Feb 02 12:20:12 crc kubenswrapper[4925]: I0202 12:20:12.849052 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-utilities" (OuterVolumeSpecName: "utilities") pod "dcb08437-fc63-4173-95ea-76c445338d3d" (UID: "dcb08437-fc63-4173-95ea-76c445338d3d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:20:12 crc kubenswrapper[4925]: I0202 12:20:12.864722 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb08437-fc63-4173-95ea-76c445338d3d-kube-api-access-7kxnm" (OuterVolumeSpecName: "kube-api-access-7kxnm") pod "dcb08437-fc63-4173-95ea-76c445338d3d" (UID: "dcb08437-fc63-4173-95ea-76c445338d3d"). InnerVolumeSpecName "kube-api-access-7kxnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:20:12 crc kubenswrapper[4925]: I0202 12:20:12.950351 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:20:12 crc kubenswrapper[4925]: I0202 12:20:12.950386 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kxnm\" (UniqueName: \"kubernetes.io/projected/dcb08437-fc63-4173-95ea-76c445338d3d-kube-api-access-7kxnm\") on node \"crc\" DevicePath \"\"" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.053710 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcb08437-fc63-4173-95ea-76c445338d3d" (UID: "dcb08437-fc63-4173-95ea-76c445338d3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.125220 4925 generic.go:334] "Generic (PLEG): container finished" podID="dcb08437-fc63-4173-95ea-76c445338d3d" containerID="aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353" exitCode=0 Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.125274 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdjpb" event={"ID":"dcb08437-fc63-4173-95ea-76c445338d3d","Type":"ContainerDied","Data":"aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353"} Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.125312 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdjpb" event={"ID":"dcb08437-fc63-4173-95ea-76c445338d3d","Type":"ContainerDied","Data":"ff31e1172c8c105a16b5405844c86db1e6b1ed8300a622f0be8c3a5f717b259c"} Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.125329 4925 scope.go:117] "RemoveContainer" containerID="aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.125344 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdjpb" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.153719 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcb08437-fc63-4173-95ea-76c445338d3d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.155069 4925 scope.go:117] "RemoveContainer" containerID="1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.198346 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdjpb"] Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.210054 4925 scope.go:117] "RemoveContainer" containerID="5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.210932 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xdjpb"] Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.249052 4925 scope.go:117] "RemoveContainer" containerID="aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353" Feb 02 12:20:13 crc kubenswrapper[4925]: E0202 12:20:13.249689 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353\": container with ID starting with aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353 not found: ID does not exist" containerID="aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.249737 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353"} err="failed to get container status 
\"aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353\": rpc error: code = NotFound desc = could not find container \"aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353\": container with ID starting with aefd07574f1fd7f6d8f062b581d5612cdf68927a5df18b02d459ca0b46d2c353 not found: ID does not exist" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.249769 4925 scope.go:117] "RemoveContainer" containerID="1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e" Feb 02 12:20:13 crc kubenswrapper[4925]: E0202 12:20:13.250306 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e\": container with ID starting with 1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e not found: ID does not exist" containerID="1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.250357 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e"} err="failed to get container status \"1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e\": rpc error: code = NotFound desc = could not find container \"1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e\": container with ID starting with 1ca4246d2f7291889ac40cc8970439005b3ddb2108fc4ae78ad1e739889fc29e not found: ID does not exist" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.250386 4925 scope.go:117] "RemoveContainer" containerID="5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3" Feb 02 12:20:13 crc kubenswrapper[4925]: E0202 12:20:13.250920 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3\": container with ID starting with 5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3 not found: ID does not exist" containerID="5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3" Feb 02 12:20:13 crc kubenswrapper[4925]: I0202 12:20:13.250944 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3"} err="failed to get container status \"5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3\": rpc error: code = NotFound desc = could not find container \"5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3\": container with ID starting with 5e69b88023c3c45deb7c998ac4f2ebdfd2219b0090da1f4ffb8cf38b56cf85a3 not found: ID does not exist" Feb 02 12:20:14 crc kubenswrapper[4925]: I0202 12:20:14.675742 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb08437-fc63-4173-95ea-76c445338d3d" path="/var/lib/kubelet/pods/dcb08437-fc63-4173-95ea-76c445338d3d/volumes" Feb 02 12:20:46 crc kubenswrapper[4925]: I0202 12:20:46.433941 4925 generic.go:334] "Generic (PLEG): container finished" podID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" containerID="b04e863f8f70627a36ab0158b205fbfdc6fa66c0485d1c65cbb3517c7a29a74c" exitCode=0 Feb 02 12:20:46 crc kubenswrapper[4925]: I0202 12:20:46.434051 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" event={"ID":"9ed232c0-fe5f-4069-9cf4-adfe339d2da4","Type":"ContainerDied","Data":"b04e863f8f70627a36ab0158b205fbfdc6fa66c0485d1c65cbb3517c7a29a74c"} Feb 02 12:20:46 crc kubenswrapper[4925]: I0202 12:20:46.435763 4925 scope.go:117] "RemoveContainer" containerID="b04e863f8f70627a36ab0158b205fbfdc6fa66c0485d1c65cbb3517c7a29a74c" Feb 02 12:20:46 crc kubenswrapper[4925]: I0202 12:20:46.765622 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-s7vgc_must-gather-6dbzg_9ed232c0-fe5f-4069-9cf4-adfe339d2da4/gather/0.log" Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.323869 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-s7vgc/must-gather-6dbzg"] Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.324865 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" podUID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" containerName="copy" containerID="cri-o://70f6fd2558319e5792588bbfbaae0f8240561bbfc2927840610703e4b6c8882a" gracePeriod=2 Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.340996 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-s7vgc/must-gather-6dbzg"] Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.511478 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-s7vgc_must-gather-6dbzg_9ed232c0-fe5f-4069-9cf4-adfe339d2da4/copy/0.log" Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.511987 4925 generic.go:334] "Generic (PLEG): container finished" podID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" containerID="70f6fd2558319e5792588bbfbaae0f8240561bbfc2927840610703e4b6c8882a" exitCode=143 Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.808120 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-s7vgc_must-gather-6dbzg_9ed232c0-fe5f-4069-9cf4-adfe339d2da4/copy/0.log" Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.809138 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.918507 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-must-gather-output\") pod \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\" (UID: \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\") " Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.918590 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fftkh\" (UniqueName: \"kubernetes.io/projected/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-kube-api-access-fftkh\") pod \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\" (UID: \"9ed232c0-fe5f-4069-9cf4-adfe339d2da4\") " Feb 02 12:20:55 crc kubenswrapper[4925]: I0202 12:20:55.925541 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-kube-api-access-fftkh" (OuterVolumeSpecName: "kube-api-access-fftkh") pod "9ed232c0-fe5f-4069-9cf4-adfe339d2da4" (UID: "9ed232c0-fe5f-4069-9cf4-adfe339d2da4"). InnerVolumeSpecName "kube-api-access-fftkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:20:56 crc kubenswrapper[4925]: I0202 12:20:56.021003 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fftkh\" (UniqueName: \"kubernetes.io/projected/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-kube-api-access-fftkh\") on node \"crc\" DevicePath \"\"" Feb 02 12:20:56 crc kubenswrapper[4925]: I0202 12:20:56.075838 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9ed232c0-fe5f-4069-9cf4-adfe339d2da4" (UID: "9ed232c0-fe5f-4069-9cf4-adfe339d2da4"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:20:56 crc kubenswrapper[4925]: I0202 12:20:56.123303 4925 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9ed232c0-fe5f-4069-9cf4-adfe339d2da4-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 12:20:56 crc kubenswrapper[4925]: I0202 12:20:56.521240 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-s7vgc_must-gather-6dbzg_9ed232c0-fe5f-4069-9cf4-adfe339d2da4/copy/0.log" Feb 02 12:20:56 crc kubenswrapper[4925]: I0202 12:20:56.523191 4925 scope.go:117] "RemoveContainer" containerID="70f6fd2558319e5792588bbfbaae0f8240561bbfc2927840610703e4b6c8882a" Feb 02 12:20:56 crc kubenswrapper[4925]: I0202 12:20:56.523215 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-s7vgc/must-gather-6dbzg" Feb 02 12:20:56 crc kubenswrapper[4925]: I0202 12:20:56.544985 4925 scope.go:117] "RemoveContainer" containerID="b04e863f8f70627a36ab0158b205fbfdc6fa66c0485d1c65cbb3517c7a29a74c" Feb 02 12:20:56 crc kubenswrapper[4925]: I0202 12:20:56.676548 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" path="/var/lib/kubelet/pods/9ed232c0-fe5f-4069-9cf4-adfe339d2da4/volumes" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.468622 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2fjw"] Feb 02 12:21:05 crc kubenswrapper[4925]: E0202 12:21:05.471480 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb08437-fc63-4173-95ea-76c445338d3d" containerName="registry-server" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.471654 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb08437-fc63-4173-95ea-76c445338d3d" containerName="registry-server" Feb 02 12:21:05 crc kubenswrapper[4925]: E0202 12:21:05.471723 4925 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" containerName="copy" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.471782 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" containerName="copy" Feb 02 12:21:05 crc kubenswrapper[4925]: E0202 12:21:05.471871 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" containerName="gather" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.471946 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" containerName="gather" Feb 02 12:21:05 crc kubenswrapper[4925]: E0202 12:21:05.472008 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb08437-fc63-4173-95ea-76c445338d3d" containerName="extract-utilities" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.472066 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb08437-fc63-4173-95ea-76c445338d3d" containerName="extract-utilities" Feb 02 12:21:05 crc kubenswrapper[4925]: E0202 12:21:05.472156 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb08437-fc63-4173-95ea-76c445338d3d" containerName="extract-content" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.472268 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb08437-fc63-4173-95ea-76c445338d3d" containerName="extract-content" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.472496 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" containerName="gather" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.472565 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb08437-fc63-4173-95ea-76c445338d3d" containerName="registry-server" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.472626 4925 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="9ed232c0-fe5f-4069-9cf4-adfe339d2da4" containerName="copy" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.473997 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.491193 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2fjw"] Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.615370 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-catalog-content\") pod \"redhat-operators-k2fjw\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.615452 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p89l\" (UniqueName: \"kubernetes.io/projected/ae326276-9d93-4067-98c4-292d3a626a4f-kube-api-access-5p89l\") pod \"redhat-operators-k2fjw\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.615491 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-utilities\") pod \"redhat-operators-k2fjw\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.716857 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-catalog-content\") pod \"redhat-operators-k2fjw\" (UID: 
\"ae326276-9d93-4067-98c4-292d3a626a4f\") " pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.716979 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p89l\" (UniqueName: \"kubernetes.io/projected/ae326276-9d93-4067-98c4-292d3a626a4f-kube-api-access-5p89l\") pod \"redhat-operators-k2fjw\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.717059 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-utilities\") pod \"redhat-operators-k2fjw\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.718753 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-utilities\") pod \"redhat-operators-k2fjw\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.719042 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-catalog-content\") pod \"redhat-operators-k2fjw\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.741936 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p89l\" (UniqueName: \"kubernetes.io/projected/ae326276-9d93-4067-98c4-292d3a626a4f-kube-api-access-5p89l\") pod \"redhat-operators-k2fjw\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " 
pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:05 crc kubenswrapper[4925]: I0202 12:21:05.795850 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:06 crc kubenswrapper[4925]: I0202 12:21:06.268914 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2fjw"] Feb 02 12:21:06 crc kubenswrapper[4925]: I0202 12:21:06.622483 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2fjw" event={"ID":"ae326276-9d93-4067-98c4-292d3a626a4f","Type":"ContainerStarted","Data":"b690f54adfda4348de56f2478ef851db911629878a88fd7db741502e08caa218"} Feb 02 12:21:07 crc kubenswrapper[4925]: I0202 12:21:07.631029 4925 generic.go:334] "Generic (PLEG): container finished" podID="ae326276-9d93-4067-98c4-292d3a626a4f" containerID="19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8" exitCode=0 Feb 02 12:21:07 crc kubenswrapper[4925]: I0202 12:21:07.631097 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2fjw" event={"ID":"ae326276-9d93-4067-98c4-292d3a626a4f","Type":"ContainerDied","Data":"19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8"} Feb 02 12:21:10 crc kubenswrapper[4925]: I0202 12:21:10.657627 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2fjw" event={"ID":"ae326276-9d93-4067-98c4-292d3a626a4f","Type":"ContainerStarted","Data":"36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d"} Feb 02 12:21:12 crc kubenswrapper[4925]: I0202 12:21:12.677222 4925 generic.go:334] "Generic (PLEG): container finished" podID="ae326276-9d93-4067-98c4-292d3a626a4f" containerID="36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d" exitCode=0 Feb 02 12:21:12 crc kubenswrapper[4925]: I0202 12:21:12.677292 4925 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-k2fjw" event={"ID":"ae326276-9d93-4067-98c4-292d3a626a4f","Type":"ContainerDied","Data":"36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d"} Feb 02 12:21:12 crc kubenswrapper[4925]: I0202 12:21:12.680691 4925 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 12:21:13 crc kubenswrapper[4925]: I0202 12:21:13.398412 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:21:13 crc kubenswrapper[4925]: I0202 12:21:13.398828 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:21:13 crc kubenswrapper[4925]: I0202 12:21:13.688562 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2fjw" event={"ID":"ae326276-9d93-4067-98c4-292d3a626a4f","Type":"ContainerStarted","Data":"cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43"} Feb 02 12:21:13 crc kubenswrapper[4925]: I0202 12:21:13.708684 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2fjw" podStartSLOduration=3.241984223 podStartE2EDuration="8.708665665s" podCreationTimestamp="2026-02-02 12:21:05 +0000 UTC" firstStartedPulling="2026-02-02 12:21:07.63363494 +0000 UTC m=+5044.637883902" lastFinishedPulling="2026-02-02 12:21:13.100316392 +0000 UTC m=+5050.104565344" observedRunningTime="2026-02-02 12:21:13.704176134 +0000 UTC m=+5050.708425116" 
watchObservedRunningTime="2026-02-02 12:21:13.708665665 +0000 UTC m=+5050.712914627" Feb 02 12:21:15 crc kubenswrapper[4925]: I0202 12:21:15.796754 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:15 crc kubenswrapper[4925]: I0202 12:21:15.797848 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:16 crc kubenswrapper[4925]: I0202 12:21:16.847875 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2fjw" podUID="ae326276-9d93-4067-98c4-292d3a626a4f" containerName="registry-server" probeResult="failure" output=< Feb 02 12:21:16 crc kubenswrapper[4925]: timeout: failed to connect service ":50051" within 1s Feb 02 12:21:16 crc kubenswrapper[4925]: > Feb 02 12:21:25 crc kubenswrapper[4925]: I0202 12:21:25.852825 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:25 crc kubenswrapper[4925]: I0202 12:21:25.900149 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:26 crc kubenswrapper[4925]: I0202 12:21:26.097214 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2fjw"] Feb 02 12:21:27 crc kubenswrapper[4925]: I0202 12:21:27.811166 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2fjw" podUID="ae326276-9d93-4067-98c4-292d3a626a4f" containerName="registry-server" containerID="cri-o://cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43" gracePeriod=2 Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.794355 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.838599 4925 generic.go:334] "Generic (PLEG): container finished" podID="ae326276-9d93-4067-98c4-292d3a626a4f" containerID="cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43" exitCode=0 Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.838671 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2fjw" event={"ID":"ae326276-9d93-4067-98c4-292d3a626a4f","Type":"ContainerDied","Data":"cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43"} Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.838697 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2fjw" event={"ID":"ae326276-9d93-4067-98c4-292d3a626a4f","Type":"ContainerDied","Data":"b690f54adfda4348de56f2478ef851db911629878a88fd7db741502e08caa218"} Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.838712 4925 scope.go:117] "RemoveContainer" containerID="cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.838839 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2fjw" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.859988 4925 scope.go:117] "RemoveContainer" containerID="36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.883618 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-utilities\") pod \"ae326276-9d93-4067-98c4-292d3a626a4f\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.883708 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p89l\" (UniqueName: \"kubernetes.io/projected/ae326276-9d93-4067-98c4-292d3a626a4f-kube-api-access-5p89l\") pod \"ae326276-9d93-4067-98c4-292d3a626a4f\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.883784 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-catalog-content\") pod \"ae326276-9d93-4067-98c4-292d3a626a4f\" (UID: \"ae326276-9d93-4067-98c4-292d3a626a4f\") " Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.884624 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-utilities" (OuterVolumeSpecName: "utilities") pod "ae326276-9d93-4067-98c4-292d3a626a4f" (UID: "ae326276-9d93-4067-98c4-292d3a626a4f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.889806 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae326276-9d93-4067-98c4-292d3a626a4f-kube-api-access-5p89l" (OuterVolumeSpecName: "kube-api-access-5p89l") pod "ae326276-9d93-4067-98c4-292d3a626a4f" (UID: "ae326276-9d93-4067-98c4-292d3a626a4f"). InnerVolumeSpecName "kube-api-access-5p89l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.897157 4925 scope.go:117] "RemoveContainer" containerID="19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.960226 4925 scope.go:117] "RemoveContainer" containerID="cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43" Feb 02 12:21:28 crc kubenswrapper[4925]: E0202 12:21:28.960680 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43\": container with ID starting with cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43 not found: ID does not exist" containerID="cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.960726 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43"} err="failed to get container status \"cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43\": rpc error: code = NotFound desc = could not find container \"cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43\": container with ID starting with cbb5540a8232d03913930421c789d4af3aa658f5148c8c1270573d7baa73bc43 not found: ID does not exist" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.960756 
4925 scope.go:117] "RemoveContainer" containerID="36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d" Feb 02 12:21:28 crc kubenswrapper[4925]: E0202 12:21:28.961149 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d\": container with ID starting with 36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d not found: ID does not exist" containerID="36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.961187 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d"} err="failed to get container status \"36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d\": rpc error: code = NotFound desc = could not find container \"36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d\": container with ID starting with 36722242f107a76b2fc9844c9c1e78520d1c80c039a6fffcc66c5b2055479d2d not found: ID does not exist" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.961207 4925 scope.go:117] "RemoveContainer" containerID="19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8" Feb 02 12:21:28 crc kubenswrapper[4925]: E0202 12:21:28.961504 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8\": container with ID starting with 19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8 not found: ID does not exist" containerID="19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.961524 4925 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8"} err="failed to get container status \"19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8\": rpc error: code = NotFound desc = could not find container \"19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8\": container with ID starting with 19a3774e3942848afbce1f6f31e913a35cc6dd1f9034b3ba192072ace6896ce8 not found: ID does not exist" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.986174 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:21:28 crc kubenswrapper[4925]: I0202 12:21:28.986215 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p89l\" (UniqueName: \"kubernetes.io/projected/ae326276-9d93-4067-98c4-292d3a626a4f-kube-api-access-5p89l\") on node \"crc\" DevicePath \"\"" Feb 02 12:21:29 crc kubenswrapper[4925]: I0202 12:21:29.039034 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae326276-9d93-4067-98c4-292d3a626a4f" (UID: "ae326276-9d93-4067-98c4-292d3a626a4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:21:29 crc kubenswrapper[4925]: I0202 12:21:29.087568 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae326276-9d93-4067-98c4-292d3a626a4f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:21:29 crc kubenswrapper[4925]: I0202 12:21:29.180271 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2fjw"] Feb 02 12:21:29 crc kubenswrapper[4925]: I0202 12:21:29.188806 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2fjw"] Feb 02 12:21:30 crc kubenswrapper[4925]: I0202 12:21:30.675408 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae326276-9d93-4067-98c4-292d3a626a4f" path="/var/lib/kubelet/pods/ae326276-9d93-4067-98c4-292d3a626a4f/volumes" Feb 02 12:21:31 crc kubenswrapper[4925]: I0202 12:21:31.385049 4925 scope.go:117] "RemoveContainer" containerID="ab2c7bee3c54d3943d65a047d69a1ce19ce65bd4cfa82510c29c2b61029c37a4" Feb 02 12:21:43 crc kubenswrapper[4925]: I0202 12:21:43.398680 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:21:43 crc kubenswrapper[4925]: I0202 12:21:43.399473 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:22:13 crc kubenswrapper[4925]: I0202 12:22:13.398540 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:22:13 crc kubenswrapper[4925]: I0202 12:22:13.398934 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:22:13 crc kubenswrapper[4925]: I0202 12:22:13.398976 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 12:22:13 crc kubenswrapper[4925]: I0202 12:22:13.399722 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 12:22:13 crc kubenswrapper[4925]: I0202 12:22:13.399774 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" gracePeriod=600 Feb 02 12:22:13 crc kubenswrapper[4925]: E0202 12:22:13.523837 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:22:14 crc kubenswrapper[4925]: I0202 12:22:14.246322 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" exitCode=0 Feb 02 12:22:14 crc kubenswrapper[4925]: I0202 12:22:14.246378 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5"} Feb 02 12:22:14 crc kubenswrapper[4925]: I0202 12:22:14.246876 4925 scope.go:117] "RemoveContainer" containerID="c5d69f58afc7bdb04bc664b8e1f93d1bef487fdc416cef9285aac62e414c7431" Feb 02 12:22:14 crc kubenswrapper[4925]: I0202 12:22:14.247474 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:22:14 crc kubenswrapper[4925]: E0202 12:22:14.247956 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:22:26 crc kubenswrapper[4925]: I0202 12:22:26.664563 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:22:26 crc kubenswrapper[4925]: E0202 12:22:26.666394 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:22:31 crc kubenswrapper[4925]: I0202 12:22:31.465451 4925 scope.go:117] "RemoveContainer" containerID="3ddad9514b7ae5ef693358326ef3d58ce11686c047809b93b5160da06d570cad" Feb 02 12:22:40 crc kubenswrapper[4925]: I0202 12:22:40.664665 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:22:40 crc kubenswrapper[4925]: E0202 12:22:40.665337 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:22:52 crc kubenswrapper[4925]: I0202 12:22:52.665592 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:22:52 crc kubenswrapper[4925]: E0202 12:22:52.666588 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:23:05 crc kubenswrapper[4925]: I0202 12:23:05.664788 4925 scope.go:117] "RemoveContainer" 
containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:23:05 crc kubenswrapper[4925]: E0202 12:23:05.665525 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:23:19 crc kubenswrapper[4925]: I0202 12:23:19.664535 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:23:19 crc kubenswrapper[4925]: E0202 12:23:19.667405 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:23:30 crc kubenswrapper[4925]: I0202 12:23:30.665218 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:23:30 crc kubenswrapper[4925]: E0202 12:23:30.665998 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:23:45 crc kubenswrapper[4925]: I0202 12:23:45.664211 4925 scope.go:117] 
"RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:23:45 crc kubenswrapper[4925]: E0202 12:23:45.665066 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:23:59 crc kubenswrapper[4925]: I0202 12:23:59.665129 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:23:59 crc kubenswrapper[4925]: E0202 12:23:59.666189 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.062469 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fz5vx/must-gather-v4wb9"] Feb 02 12:24:00 crc kubenswrapper[4925]: E0202 12:24:00.063183 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae326276-9d93-4067-98c4-292d3a626a4f" containerName="extract-utilities" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.063204 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae326276-9d93-4067-98c4-292d3a626a4f" containerName="extract-utilities" Feb 02 12:24:00 crc kubenswrapper[4925]: E0202 12:24:00.063222 4925 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ae326276-9d93-4067-98c4-292d3a626a4f" containerName="extract-content" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.063229 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae326276-9d93-4067-98c4-292d3a626a4f" containerName="extract-content" Feb 02 12:24:00 crc kubenswrapper[4925]: E0202 12:24:00.063254 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae326276-9d93-4067-98c4-292d3a626a4f" containerName="registry-server" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.063263 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae326276-9d93-4067-98c4-292d3a626a4f" containerName="registry-server" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.063442 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae326276-9d93-4067-98c4-292d3a626a4f" containerName="registry-server" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.064455 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.066721 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fz5vx"/"default-dockercfg-rwxfv" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.067330 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fz5vx"/"kube-root-ca.crt" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.069573 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fz5vx"/"openshift-service-ca.crt" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.079346 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fz5vx/must-gather-v4wb9"] Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.211277 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plfbs\" (UniqueName: 
\"kubernetes.io/projected/7e7bc800-7224-4c0f-9e73-f93a1ad76039-kube-api-access-plfbs\") pod \"must-gather-v4wb9\" (UID: \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\") " pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.211405 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e7bc800-7224-4c0f-9e73-f93a1ad76039-must-gather-output\") pod \"must-gather-v4wb9\" (UID: \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\") " pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.313266 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e7bc800-7224-4c0f-9e73-f93a1ad76039-must-gather-output\") pod \"must-gather-v4wb9\" (UID: \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\") " pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.313460 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plfbs\" (UniqueName: \"kubernetes.io/projected/7e7bc800-7224-4c0f-9e73-f93a1ad76039-kube-api-access-plfbs\") pod \"must-gather-v4wb9\" (UID: \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\") " pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.313788 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e7bc800-7224-4c0f-9e73-f93a1ad76039-must-gather-output\") pod \"must-gather-v4wb9\" (UID: \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\") " pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.335040 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plfbs\" (UniqueName: 
\"kubernetes.io/projected/7e7bc800-7224-4c0f-9e73-f93a1ad76039-kube-api-access-plfbs\") pod \"must-gather-v4wb9\" (UID: \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\") " pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.381395 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:24:00 crc kubenswrapper[4925]: I0202 12:24:00.783686 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fz5vx/must-gather-v4wb9"] Feb 02 12:24:01 crc kubenswrapper[4925]: I0202 12:24:01.316635 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" event={"ID":"7e7bc800-7224-4c0f-9e73-f93a1ad76039","Type":"ContainerStarted","Data":"eaa2ae3b0f057c850ef93333acc00b1ea5ff5fa80e3ae414e3deae46d5c1e5bc"} Feb 02 12:24:01 crc kubenswrapper[4925]: I0202 12:24:01.316928 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" event={"ID":"7e7bc800-7224-4c0f-9e73-f93a1ad76039","Type":"ContainerStarted","Data":"da4bb0b0f1fcf04d2a87591e06d2477ce429f06ef285b74e82a840925d849d77"} Feb 02 12:24:02 crc kubenswrapper[4925]: I0202 12:24:02.332589 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" event={"ID":"7e7bc800-7224-4c0f-9e73-f93a1ad76039","Type":"ContainerStarted","Data":"f7403ca33f2edc36fe3968547d2efead1248054be915b4ebc1f37e825c281fcb"} Feb 02 12:24:02 crc kubenswrapper[4925]: I0202 12:24:02.356951 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" podStartSLOduration=2.35692728 podStartE2EDuration="2.35692728s" podCreationTimestamp="2026-02-02 12:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 
12:24:02.349707525 +0000 UTC m=+5219.353956527" watchObservedRunningTime="2026-02-02 12:24:02.35692728 +0000 UTC m=+5219.361176242" Feb 02 12:24:05 crc kubenswrapper[4925]: I0202 12:24:05.187970 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fz5vx/crc-debug-9bsqb"] Feb 02 12:24:05 crc kubenswrapper[4925]: I0202 12:24:05.189684 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:05 crc kubenswrapper[4925]: I0202 12:24:05.308478 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7lsx\" (UniqueName: \"kubernetes.io/projected/49fd6e4b-cb81-435e-a2d3-c820cc62c432-kube-api-access-f7lsx\") pod \"crc-debug-9bsqb\" (UID: \"49fd6e4b-cb81-435e-a2d3-c820cc62c432\") " pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:05 crc kubenswrapper[4925]: I0202 12:24:05.308536 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd6e4b-cb81-435e-a2d3-c820cc62c432-host\") pod \"crc-debug-9bsqb\" (UID: \"49fd6e4b-cb81-435e-a2d3-c820cc62c432\") " pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:05 crc kubenswrapper[4925]: I0202 12:24:05.410169 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7lsx\" (UniqueName: \"kubernetes.io/projected/49fd6e4b-cb81-435e-a2d3-c820cc62c432-kube-api-access-f7lsx\") pod \"crc-debug-9bsqb\" (UID: \"49fd6e4b-cb81-435e-a2d3-c820cc62c432\") " pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:05 crc kubenswrapper[4925]: I0202 12:24:05.410458 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd6e4b-cb81-435e-a2d3-c820cc62c432-host\") pod \"crc-debug-9bsqb\" (UID: \"49fd6e4b-cb81-435e-a2d3-c820cc62c432\") " 
pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:05 crc kubenswrapper[4925]: I0202 12:24:05.410671 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd6e4b-cb81-435e-a2d3-c820cc62c432-host\") pod \"crc-debug-9bsqb\" (UID: \"49fd6e4b-cb81-435e-a2d3-c820cc62c432\") " pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:05 crc kubenswrapper[4925]: I0202 12:24:05.429867 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7lsx\" (UniqueName: \"kubernetes.io/projected/49fd6e4b-cb81-435e-a2d3-c820cc62c432-kube-api-access-f7lsx\") pod \"crc-debug-9bsqb\" (UID: \"49fd6e4b-cb81-435e-a2d3-c820cc62c432\") " pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:05 crc kubenswrapper[4925]: I0202 12:24:05.521150 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:06 crc kubenswrapper[4925]: I0202 12:24:06.367192 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" event={"ID":"49fd6e4b-cb81-435e-a2d3-c820cc62c432","Type":"ContainerStarted","Data":"27a598859bc979ef16dbad6b078a643aabf161a9cdf375dca4d5fc55753c9a4d"} Feb 02 12:24:06 crc kubenswrapper[4925]: I0202 12:24:06.367823 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" event={"ID":"49fd6e4b-cb81-435e-a2d3-c820cc62c432","Type":"ContainerStarted","Data":"3c34a269479a71540f67f58ae39970118d5cb54075933f945aa903a9da72a9d2"} Feb 02 12:24:06 crc kubenswrapper[4925]: I0202 12:24:06.386444 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" podStartSLOduration=1.38641649 podStartE2EDuration="1.38641649s" podCreationTimestamp="2026-02-02 12:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 12:24:06.380806698 +0000 UTC m=+5223.385055660" watchObservedRunningTime="2026-02-02 12:24:06.38641649 +0000 UTC m=+5223.390665442" Feb 02 12:24:12 crc kubenswrapper[4925]: I0202 12:24:12.664622 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:24:12 crc kubenswrapper[4925]: E0202 12:24:12.665613 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:24:23 crc kubenswrapper[4925]: I0202 12:24:23.666371 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:24:23 crc kubenswrapper[4925]: E0202 12:24:23.667700 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:24:38 crc kubenswrapper[4925]: I0202 12:24:38.664741 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:24:38 crc kubenswrapper[4925]: E0202 12:24:38.665504 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:24:46 crc kubenswrapper[4925]: I0202 12:24:46.695864 4925 generic.go:334] "Generic (PLEG): container finished" podID="49fd6e4b-cb81-435e-a2d3-c820cc62c432" containerID="27a598859bc979ef16dbad6b078a643aabf161a9cdf375dca4d5fc55753c9a4d" exitCode=0 Feb 02 12:24:46 crc kubenswrapper[4925]: I0202 12:24:46.695975 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" event={"ID":"49fd6e4b-cb81-435e-a2d3-c820cc62c432","Type":"ContainerDied","Data":"27a598859bc979ef16dbad6b078a643aabf161a9cdf375dca4d5fc55753c9a4d"} Feb 02 12:24:47 crc kubenswrapper[4925]: I0202 12:24:47.963055 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:47 crc kubenswrapper[4925]: I0202 12:24:47.992310 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fz5vx/crc-debug-9bsqb"] Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.002642 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fz5vx/crc-debug-9bsqb"] Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.065829 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7lsx\" (UniqueName: \"kubernetes.io/projected/49fd6e4b-cb81-435e-a2d3-c820cc62c432-kube-api-access-f7lsx\") pod \"49fd6e4b-cb81-435e-a2d3-c820cc62c432\" (UID: \"49fd6e4b-cb81-435e-a2d3-c820cc62c432\") " Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.066041 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd6e4b-cb81-435e-a2d3-c820cc62c432-host\") pod 
\"49fd6e4b-cb81-435e-a2d3-c820cc62c432\" (UID: \"49fd6e4b-cb81-435e-a2d3-c820cc62c432\") " Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.066496 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fd6e4b-cb81-435e-a2d3-c820cc62c432-host" (OuterVolumeSpecName: "host") pod "49fd6e4b-cb81-435e-a2d3-c820cc62c432" (UID: "49fd6e4b-cb81-435e-a2d3-c820cc62c432"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.088376 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49fd6e4b-cb81-435e-a2d3-c820cc62c432-kube-api-access-f7lsx" (OuterVolumeSpecName: "kube-api-access-f7lsx") pod "49fd6e4b-cb81-435e-a2d3-c820cc62c432" (UID: "49fd6e4b-cb81-435e-a2d3-c820cc62c432"). InnerVolumeSpecName "kube-api-access-f7lsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.168363 4925 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd6e4b-cb81-435e-a2d3-c820cc62c432-host\") on node \"crc\" DevicePath \"\"" Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.168405 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7lsx\" (UniqueName: \"kubernetes.io/projected/49fd6e4b-cb81-435e-a2d3-c820cc62c432-kube-api-access-f7lsx\") on node \"crc\" DevicePath \"\"" Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.675260 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fd6e4b-cb81-435e-a2d3-c820cc62c432" path="/var/lib/kubelet/pods/49fd6e4b-cb81-435e-a2d3-c820cc62c432/volumes" Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.724947 4925 scope.go:117] "RemoveContainer" containerID="27a598859bc979ef16dbad6b078a643aabf161a9cdf375dca4d5fc55753c9a4d" Feb 02 12:24:48 crc kubenswrapper[4925]: I0202 12:24:48.724980 4925 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-9bsqb" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.218817 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fz5vx/crc-debug-wtjrf"] Feb 02 12:24:49 crc kubenswrapper[4925]: E0202 12:24:49.219642 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fd6e4b-cb81-435e-a2d3-c820cc62c432" containerName="container-00" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.219657 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fd6e4b-cb81-435e-a2d3-c820cc62c432" containerName="container-00" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.219916 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fd6e4b-cb81-435e-a2d3-c820cc62c432" containerName="container-00" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.220666 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.289395 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7814005-64ca-4a55-b850-ed3a9f6718e9-host\") pod \"crc-debug-wtjrf\" (UID: \"f7814005-64ca-4a55-b850-ed3a9f6718e9\") " pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.289479 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j72t5\" (UniqueName: \"kubernetes.io/projected/f7814005-64ca-4a55-b850-ed3a9f6718e9-kube-api-access-j72t5\") pod \"crc-debug-wtjrf\" (UID: \"f7814005-64ca-4a55-b850-ed3a9f6718e9\") " pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.391462 4925 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7814005-64ca-4a55-b850-ed3a9f6718e9-host\") pod \"crc-debug-wtjrf\" (UID: \"f7814005-64ca-4a55-b850-ed3a9f6718e9\") " pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.391768 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j72t5\" (UniqueName: \"kubernetes.io/projected/f7814005-64ca-4a55-b850-ed3a9f6718e9-kube-api-access-j72t5\") pod \"crc-debug-wtjrf\" (UID: \"f7814005-64ca-4a55-b850-ed3a9f6718e9\") " pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.391634 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7814005-64ca-4a55-b850-ed3a9f6718e9-host\") pod \"crc-debug-wtjrf\" (UID: \"f7814005-64ca-4a55-b850-ed3a9f6718e9\") " pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.411532 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j72t5\" (UniqueName: \"kubernetes.io/projected/f7814005-64ca-4a55-b850-ed3a9f6718e9-kube-api-access-j72t5\") pod \"crc-debug-wtjrf\" (UID: \"f7814005-64ca-4a55-b850-ed3a9f6718e9\") " pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.539567 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:49 crc kubenswrapper[4925]: I0202 12:24:49.733861 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" event={"ID":"f7814005-64ca-4a55-b850-ed3a9f6718e9","Type":"ContainerStarted","Data":"da7f9c9e7cbb3d176df120d79e1213748e41e3a3e26307ffd5e5f07f24ed27ba"} Feb 02 12:24:50 crc kubenswrapper[4925]: I0202 12:24:50.664606 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:24:50 crc kubenswrapper[4925]: E0202 12:24:50.665263 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:24:50 crc kubenswrapper[4925]: I0202 12:24:50.746105 4925 generic.go:334] "Generic (PLEG): container finished" podID="f7814005-64ca-4a55-b850-ed3a9f6718e9" containerID="748526a2aab14ed3bb3c84e3ea9688cab32cd8dcf8e5c2f45d2271165de25d2b" exitCode=0 Feb 02 12:24:50 crc kubenswrapper[4925]: I0202 12:24:50.746340 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" event={"ID":"f7814005-64ca-4a55-b850-ed3a9f6718e9","Type":"ContainerDied","Data":"748526a2aab14ed3bb3c84e3ea9688cab32cd8dcf8e5c2f45d2271165de25d2b"} Feb 02 12:24:51 crc kubenswrapper[4925]: I0202 12:24:51.847835 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:51 crc kubenswrapper[4925]: I0202 12:24:51.935461 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7814005-64ca-4a55-b850-ed3a9f6718e9-host\") pod \"f7814005-64ca-4a55-b850-ed3a9f6718e9\" (UID: \"f7814005-64ca-4a55-b850-ed3a9f6718e9\") " Feb 02 12:24:51 crc kubenswrapper[4925]: I0202 12:24:51.935581 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7814005-64ca-4a55-b850-ed3a9f6718e9-host" (OuterVolumeSpecName: "host") pod "f7814005-64ca-4a55-b850-ed3a9f6718e9" (UID: "f7814005-64ca-4a55-b850-ed3a9f6718e9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 12:24:51 crc kubenswrapper[4925]: I0202 12:24:51.935621 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j72t5\" (UniqueName: \"kubernetes.io/projected/f7814005-64ca-4a55-b850-ed3a9f6718e9-kube-api-access-j72t5\") pod \"f7814005-64ca-4a55-b850-ed3a9f6718e9\" (UID: \"f7814005-64ca-4a55-b850-ed3a9f6718e9\") " Feb 02 12:24:51 crc kubenswrapper[4925]: I0202 12:24:51.936269 4925 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7814005-64ca-4a55-b850-ed3a9f6718e9-host\") on node \"crc\" DevicePath \"\"" Feb 02 12:24:51 crc kubenswrapper[4925]: I0202 12:24:51.942195 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7814005-64ca-4a55-b850-ed3a9f6718e9-kube-api-access-j72t5" (OuterVolumeSpecName: "kube-api-access-j72t5") pod "f7814005-64ca-4a55-b850-ed3a9f6718e9" (UID: "f7814005-64ca-4a55-b850-ed3a9f6718e9"). InnerVolumeSpecName "kube-api-access-j72t5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:24:52 crc kubenswrapper[4925]: I0202 12:24:52.040677 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j72t5\" (UniqueName: \"kubernetes.io/projected/f7814005-64ca-4a55-b850-ed3a9f6718e9-kube-api-access-j72t5\") on node \"crc\" DevicePath \"\"" Feb 02 12:24:52 crc kubenswrapper[4925]: I0202 12:24:52.772354 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" event={"ID":"f7814005-64ca-4a55-b850-ed3a9f6718e9","Type":"ContainerDied","Data":"da7f9c9e7cbb3d176df120d79e1213748e41e3a3e26307ffd5e5f07f24ed27ba"} Feb 02 12:24:52 crc kubenswrapper[4925]: I0202 12:24:52.772400 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da7f9c9e7cbb3d176df120d79e1213748e41e3a3e26307ffd5e5f07f24ed27ba" Feb 02 12:24:52 crc kubenswrapper[4925]: I0202 12:24:52.772466 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-wtjrf" Feb 02 12:24:52 crc kubenswrapper[4925]: I0202 12:24:52.999348 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fz5vx/crc-debug-wtjrf"] Feb 02 12:24:53 crc kubenswrapper[4925]: I0202 12:24:53.009178 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fz5vx/crc-debug-wtjrf"] Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.166973 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fz5vx/crc-debug-2qgln"] Feb 02 12:24:54 crc kubenswrapper[4925]: E0202 12:24:54.167636 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7814005-64ca-4a55-b850-ed3a9f6718e9" containerName="container-00" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.167650 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7814005-64ca-4a55-b850-ed3a9f6718e9" containerName="container-00" Feb 02 12:24:54 crc 
kubenswrapper[4925]: I0202 12:24:54.167824 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7814005-64ca-4a55-b850-ed3a9f6718e9" containerName="container-00" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.168452 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.276532 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbgtw\" (UniqueName: \"kubernetes.io/projected/28528ea0-e8fb-495e-aeea-f784fc488da1-kube-api-access-lbgtw\") pod \"crc-debug-2qgln\" (UID: \"28528ea0-e8fb-495e-aeea-f784fc488da1\") " pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.276675 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28528ea0-e8fb-495e-aeea-f784fc488da1-host\") pod \"crc-debug-2qgln\" (UID: \"28528ea0-e8fb-495e-aeea-f784fc488da1\") " pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.378992 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28528ea0-e8fb-495e-aeea-f784fc488da1-host\") pod \"crc-debug-2qgln\" (UID: \"28528ea0-e8fb-495e-aeea-f784fc488da1\") " pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.379213 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28528ea0-e8fb-495e-aeea-f784fc488da1-host\") pod \"crc-debug-2qgln\" (UID: \"28528ea0-e8fb-495e-aeea-f784fc488da1\") " pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.379228 4925 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lbgtw\" (UniqueName: \"kubernetes.io/projected/28528ea0-e8fb-495e-aeea-f784fc488da1-kube-api-access-lbgtw\") pod \"crc-debug-2qgln\" (UID: \"28528ea0-e8fb-495e-aeea-f784fc488da1\") " pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.411944 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbgtw\" (UniqueName: \"kubernetes.io/projected/28528ea0-e8fb-495e-aeea-f784fc488da1-kube-api-access-lbgtw\") pod \"crc-debug-2qgln\" (UID: \"28528ea0-e8fb-495e-aeea-f784fc488da1\") " pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.485762 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.674128 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7814005-64ca-4a55-b850-ed3a9f6718e9" path="/var/lib/kubelet/pods/f7814005-64ca-4a55-b850-ed3a9f6718e9/volumes" Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.790445 4925 generic.go:334] "Generic (PLEG): container finished" podID="28528ea0-e8fb-495e-aeea-f784fc488da1" containerID="b14fb93820f479941ed567d7b57e01e207609b6b650eafb0d16a1ccad621696f" exitCode=0 Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.790506 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/crc-debug-2qgln" event={"ID":"28528ea0-e8fb-495e-aeea-f784fc488da1","Type":"ContainerDied","Data":"b14fb93820f479941ed567d7b57e01e207609b6b650eafb0d16a1ccad621696f"} Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.790536 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/crc-debug-2qgln" 
event={"ID":"28528ea0-e8fb-495e-aeea-f784fc488da1","Type":"ContainerStarted","Data":"828cf242263b1a77dd1b59780274a2ccb7776019b9ea68dd38d5e131c866f8cd"} Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.840768 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fz5vx/crc-debug-2qgln"] Feb 02 12:24:54 crc kubenswrapper[4925]: I0202 12:24:54.856771 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fz5vx/crc-debug-2qgln"] Feb 02 12:24:55 crc kubenswrapper[4925]: I0202 12:24:55.910638 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.012941 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbgtw\" (UniqueName: \"kubernetes.io/projected/28528ea0-e8fb-495e-aeea-f784fc488da1-kube-api-access-lbgtw\") pod \"28528ea0-e8fb-495e-aeea-f784fc488da1\" (UID: \"28528ea0-e8fb-495e-aeea-f784fc488da1\") " Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.013066 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28528ea0-e8fb-495e-aeea-f784fc488da1-host\") pod \"28528ea0-e8fb-495e-aeea-f784fc488da1\" (UID: \"28528ea0-e8fb-495e-aeea-f784fc488da1\") " Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.013455 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28528ea0-e8fb-495e-aeea-f784fc488da1-host" (OuterVolumeSpecName: "host") pod "28528ea0-e8fb-495e-aeea-f784fc488da1" (UID: "28528ea0-e8fb-495e-aeea-f784fc488da1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.013573 4925 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28528ea0-e8fb-495e-aeea-f784fc488da1-host\") on node \"crc\" DevicePath \"\"" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.020519 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28528ea0-e8fb-495e-aeea-f784fc488da1-kube-api-access-lbgtw" (OuterVolumeSpecName: "kube-api-access-lbgtw") pod "28528ea0-e8fb-495e-aeea-f784fc488da1" (UID: "28528ea0-e8fb-495e-aeea-f784fc488da1"). InnerVolumeSpecName "kube-api-access-lbgtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.072521 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p4qc2"] Feb 02 12:24:56 crc kubenswrapper[4925]: E0202 12:24:56.073029 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28528ea0-e8fb-495e-aeea-f784fc488da1" containerName="container-00" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.073047 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="28528ea0-e8fb-495e-aeea-f784fc488da1" containerName="container-00" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.073265 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="28528ea0-e8fb-495e-aeea-f784fc488da1" containerName="container-00" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.074566 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.094160 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4qc2"] Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.115192 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbgtw\" (UniqueName: \"kubernetes.io/projected/28528ea0-e8fb-495e-aeea-f784fc488da1-kube-api-access-lbgtw\") on node \"crc\" DevicePath \"\"" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.216797 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5gd7\" (UniqueName: \"kubernetes.io/projected/3830ab74-f43b-4887-9389-ca6b38d2a09d-kube-api-access-l5gd7\") pod \"certified-operators-p4qc2\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.217235 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-utilities\") pod \"certified-operators-p4qc2\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.217336 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-catalog-content\") pod \"certified-operators-p4qc2\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.319151 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5gd7\" (UniqueName: 
\"kubernetes.io/projected/3830ab74-f43b-4887-9389-ca6b38d2a09d-kube-api-access-l5gd7\") pod \"certified-operators-p4qc2\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.319309 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-utilities\") pod \"certified-operators-p4qc2\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.319341 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-catalog-content\") pod \"certified-operators-p4qc2\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.319806 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-catalog-content\") pod \"certified-operators-p4qc2\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.319913 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-utilities\") pod \"certified-operators-p4qc2\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.340058 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5gd7\" (UniqueName: 
\"kubernetes.io/projected/3830ab74-f43b-4887-9389-ca6b38d2a09d-kube-api-access-l5gd7\") pod \"certified-operators-p4qc2\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.394838 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.679943 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28528ea0-e8fb-495e-aeea-f784fc488da1" path="/var/lib/kubelet/pods/28528ea0-e8fb-495e-aeea-f784fc488da1/volumes" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.812408 4925 scope.go:117] "RemoveContainer" containerID="b14fb93820f479941ed567d7b57e01e207609b6b650eafb0d16a1ccad621696f" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.812587 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/crc-debug-2qgln" Feb 02 12:24:56 crc kubenswrapper[4925]: I0202 12:24:56.957596 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4qc2"] Feb 02 12:24:57 crc kubenswrapper[4925]: I0202 12:24:57.826837 4925 generic.go:334] "Generic (PLEG): container finished" podID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerID="e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862" exitCode=0 Feb 02 12:24:57 crc kubenswrapper[4925]: I0202 12:24:57.827140 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4qc2" event={"ID":"3830ab74-f43b-4887-9389-ca6b38d2a09d","Type":"ContainerDied","Data":"e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862"} Feb 02 12:24:57 crc kubenswrapper[4925]: I0202 12:24:57.827166 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4qc2" 
event={"ID":"3830ab74-f43b-4887-9389-ca6b38d2a09d","Type":"ContainerStarted","Data":"96683852a13129fd1ba56d189f65d9540deee900fbba373f92f496c4db0b2f2c"} Feb 02 12:24:58 crc kubenswrapper[4925]: I0202 12:24:58.853089 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4qc2" event={"ID":"3830ab74-f43b-4887-9389-ca6b38d2a09d","Type":"ContainerStarted","Data":"2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647"} Feb 02 12:24:59 crc kubenswrapper[4925]: I0202 12:24:59.861979 4925 generic.go:334] "Generic (PLEG): container finished" podID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerID="2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647" exitCode=0 Feb 02 12:24:59 crc kubenswrapper[4925]: I0202 12:24:59.862177 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4qc2" event={"ID":"3830ab74-f43b-4887-9389-ca6b38d2a09d","Type":"ContainerDied","Data":"2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647"} Feb 02 12:25:00 crc kubenswrapper[4925]: I0202 12:25:00.870983 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4qc2" event={"ID":"3830ab74-f43b-4887-9389-ca6b38d2a09d","Type":"ContainerStarted","Data":"5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc"} Feb 02 12:25:00 crc kubenswrapper[4925]: I0202 12:25:00.893237 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p4qc2" podStartSLOduration=2.372515931 podStartE2EDuration="4.893220454s" podCreationTimestamp="2026-02-02 12:24:56 +0000 UTC" firstStartedPulling="2026-02-02 12:24:57.828677161 +0000 UTC m=+5274.832926123" lastFinishedPulling="2026-02-02 12:25:00.349381684 +0000 UTC m=+5277.353630646" observedRunningTime="2026-02-02 12:25:00.888538708 +0000 UTC m=+5277.892787680" watchObservedRunningTime="2026-02-02 12:25:00.893220454 +0000 UTC 
m=+5277.897469416" Feb 02 12:25:02 crc kubenswrapper[4925]: I0202 12:25:02.664927 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:25:02 crc kubenswrapper[4925]: E0202 12:25:02.665501 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:25:06 crc kubenswrapper[4925]: I0202 12:25:06.395766 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:25:06 crc kubenswrapper[4925]: I0202 12:25:06.396349 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:25:06 crc kubenswrapper[4925]: I0202 12:25:06.464207 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:25:06 crc kubenswrapper[4925]: I0202 12:25:06.987028 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:25:08 crc kubenswrapper[4925]: I0202 12:25:08.468419 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4qc2"] Feb 02 12:25:08 crc kubenswrapper[4925]: I0202 12:25:08.940820 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p4qc2" podUID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerName="registry-server" containerID="cri-o://5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc" gracePeriod=2 Feb 
02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.402416 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.476255 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-utilities\") pod \"3830ab74-f43b-4887-9389-ca6b38d2a09d\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.476335 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-catalog-content\") pod \"3830ab74-f43b-4887-9389-ca6b38d2a09d\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.476393 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5gd7\" (UniqueName: \"kubernetes.io/projected/3830ab74-f43b-4887-9389-ca6b38d2a09d-kube-api-access-l5gd7\") pod \"3830ab74-f43b-4887-9389-ca6b38d2a09d\" (UID: \"3830ab74-f43b-4887-9389-ca6b38d2a09d\") " Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.477566 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-utilities" (OuterVolumeSpecName: "utilities") pod "3830ab74-f43b-4887-9389-ca6b38d2a09d" (UID: "3830ab74-f43b-4887-9389-ca6b38d2a09d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.481590 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3830ab74-f43b-4887-9389-ca6b38d2a09d-kube-api-access-l5gd7" (OuterVolumeSpecName: "kube-api-access-l5gd7") pod "3830ab74-f43b-4887-9389-ca6b38d2a09d" (UID: "3830ab74-f43b-4887-9389-ca6b38d2a09d"). InnerVolumeSpecName "kube-api-access-l5gd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.537305 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3830ab74-f43b-4887-9389-ca6b38d2a09d" (UID: "3830ab74-f43b-4887-9389-ca6b38d2a09d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.578992 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.579032 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3830ab74-f43b-4887-9389-ca6b38d2a09d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.579043 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5gd7\" (UniqueName: \"kubernetes.io/projected/3830ab74-f43b-4887-9389-ca6b38d2a09d-kube-api-access-l5gd7\") on node \"crc\" DevicePath \"\"" Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.952901 4925 generic.go:334] "Generic (PLEG): container finished" podID="3830ab74-f43b-4887-9389-ca6b38d2a09d" 
containerID="5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc" exitCode=0 Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.952949 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4qc2" event={"ID":"3830ab74-f43b-4887-9389-ca6b38d2a09d","Type":"ContainerDied","Data":"5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc"} Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.952972 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4qc2" Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.952994 4925 scope.go:117] "RemoveContainer" containerID="5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc" Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.952980 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4qc2" event={"ID":"3830ab74-f43b-4887-9389-ca6b38d2a09d","Type":"ContainerDied","Data":"96683852a13129fd1ba56d189f65d9540deee900fbba373f92f496c4db0b2f2c"} Feb 02 12:25:09 crc kubenswrapper[4925]: I0202 12:25:09.977626 4925 scope.go:117] "RemoveContainer" containerID="2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647" Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.001186 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4qc2"] Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.006690 4925 scope.go:117] "RemoveContainer" containerID="e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862" Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.007639 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p4qc2"] Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.041646 4925 scope.go:117] "RemoveContainer" containerID="5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc" Feb 02 
12:25:10 crc kubenswrapper[4925]: E0202 12:25:10.042107 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc\": container with ID starting with 5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc not found: ID does not exist" containerID="5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc" Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.042147 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc"} err="failed to get container status \"5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc\": rpc error: code = NotFound desc = could not find container \"5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc\": container with ID starting with 5d999ac37400d1e85c783406d19f270e4c38ac799cee1f8b4d8e24b6211ee8bc not found: ID does not exist" Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.042173 4925 scope.go:117] "RemoveContainer" containerID="2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647" Feb 02 12:25:10 crc kubenswrapper[4925]: E0202 12:25:10.042549 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647\": container with ID starting with 2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647 not found: ID does not exist" containerID="2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647" Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.042583 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647"} err="failed to get container status 
\"2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647\": rpc error: code = NotFound desc = could not find container \"2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647\": container with ID starting with 2bacfeece552368b64cc29c6c705acc383c8fb0f5253b61acc9a43bc51763647 not found: ID does not exist" Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.042598 4925 scope.go:117] "RemoveContainer" containerID="e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862" Feb 02 12:25:10 crc kubenswrapper[4925]: E0202 12:25:10.042912 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862\": container with ID starting with e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862 not found: ID does not exist" containerID="e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862" Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.042947 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862"} err="failed to get container status \"e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862\": rpc error: code = NotFound desc = could not find container \"e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862\": container with ID starting with e07af7459b4cff9b272d8de1b73e0faf46a3e26bba74a2e663a69f57a35ff862 not found: ID does not exist" Feb 02 12:25:10 crc kubenswrapper[4925]: I0202 12:25:10.674190 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3830ab74-f43b-4887-9389-ca6b38d2a09d" path="/var/lib/kubelet/pods/3830ab74-f43b-4887-9389-ca6b38d2a09d/volumes" Feb 02 12:25:17 crc kubenswrapper[4925]: I0202 12:25:17.664228 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 
12:25:17 crc kubenswrapper[4925]: E0202 12:25:17.665007 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:25:28 crc kubenswrapper[4925]: I0202 12:25:28.664621 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:25:28 crc kubenswrapper[4925]: E0202 12:25:28.665601 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:25:40 crc kubenswrapper[4925]: I0202 12:25:40.669004 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:25:40 crc kubenswrapper[4925]: E0202 12:25:40.669752 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:25:51 crc kubenswrapper[4925]: I0202 12:25:51.615584 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-8697ffdb94-2bnsl_38619cef-521e-4e12-9919-8846bed56c10/barbican-api/0.log" Feb 02 12:25:51 crc kubenswrapper[4925]: I0202 12:25:51.763396 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8697ffdb94-2bnsl_38619cef-521e-4e12-9919-8846bed56c10/barbican-api-log/0.log" Feb 02 12:25:51 crc kubenswrapper[4925]: I0202 12:25:51.870095 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cb85bfdc6-wzdz4_461effdf-7e6d-47d3-85f8-eac7940d2100/barbican-keystone-listener/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.044988 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f574dbb79-fc5vn_604a4d9b-a323-464c-b7f4-e41503e992f4/barbican-worker/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.122261 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f574dbb79-fc5vn_604a4d9b-a323-464c-b7f4-e41503e992f4/barbican-worker-log/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.132831 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cb85bfdc6-wzdz4_461effdf-7e6d-47d3-85f8-eac7940d2100/barbican-keystone-listener-log/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.299744 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pcg6w_4a342fe3-c33f-4a54-a59f-9bba07acc904/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.393911 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8803b0fe-f2e6-41bd-b2e8-b970178ff360/ceilometer-central-agent/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.531755 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_8803b0fe-f2e6-41bd-b2e8-b970178ff360/ceilometer-notification-agent/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.566256 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8803b0fe-f2e6-41bd-b2e8-b970178ff360/proxy-httpd/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.613064 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8803b0fe-f2e6-41bd-b2e8-b970178ff360/sg-core/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.748701 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-fk9kr_c87ae68d-67eb-45b4-8971-5d5d14d6c36b/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:25:52 crc kubenswrapper[4925]: I0202 12:25:52.837858 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cblx2_600dd95b-ee69-45e7-918b-85650f9e2980/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:25:53 crc kubenswrapper[4925]: I0202 12:25:53.410876 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_161c8104-b092-42f2-8e76-513b0e7991d6/probe/0.log" Feb 02 12:25:53 crc kubenswrapper[4925]: I0202 12:25:53.604655 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_314e8cb6-036e-4365-9056-026caca906f1/cinder-api/0.log" Feb 02 12:25:53 crc kubenswrapper[4925]: I0202 12:25:53.732716 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_314e8cb6-036e-4365-9056-026caca906f1/cinder-api-log/0.log" Feb 02 12:25:53 crc kubenswrapper[4925]: I0202 12:25:53.908672 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2a0d1352-9215-4b6e-831a-d9d654cc8a1e/cinder-scheduler/0.log" Feb 02 12:25:53 crc kubenswrapper[4925]: I0202 12:25:53.949517 4925 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2a0d1352-9215-4b6e-831a-d9d654cc8a1e/probe/0.log" Feb 02 12:25:54 crc kubenswrapper[4925]: I0202 12:25:54.310526 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0e7f2d02-9fa4-4e06-a6ae-77c1390e574b/probe/0.log" Feb 02 12:25:54 crc kubenswrapper[4925]: I0202 12:25:54.586420 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wzrrg_34087aed-542d-424c-a71e-a277cf32d94c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:25:54 crc kubenswrapper[4925]: I0202 12:25:54.651502 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_161c8104-b092-42f2-8e76-513b0e7991d6/cinder-backup/0.log" Feb 02 12:25:54 crc kubenswrapper[4925]: I0202 12:25:54.856593 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-csw6t_a07f6a0e-2ed8-4213-be0f-ed8ae1005a14/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:25:54 crc kubenswrapper[4925]: I0202 12:25:54.868269 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-z5snj_151bbe9a-f79f-475b-88ad-1337e6ec9312/init/0.log" Feb 02 12:25:55 crc kubenswrapper[4925]: I0202 12:25:55.261285 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-z5snj_151bbe9a-f79f-475b-88ad-1337e6ec9312/init/0.log" Feb 02 12:25:55 crc kubenswrapper[4925]: I0202 12:25:55.352514 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-z5snj_151bbe9a-f79f-475b-88ad-1337e6ec9312/dnsmasq-dns/0.log" Feb 02 12:25:55 crc kubenswrapper[4925]: I0202 12:25:55.664993 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:25:55 crc 
kubenswrapper[4925]: E0202 12:25:55.665370 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:25:55 crc kubenswrapper[4925]: I0202 12:25:55.669381 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_29d38bf8-6523-4fe2-9fb9-7385f5ea31bf/glance-log/0.log" Feb 02 12:25:55 crc kubenswrapper[4925]: I0202 12:25:55.709044 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_29d38bf8-6523-4fe2-9fb9-7385f5ea31bf/glance-httpd/0.log" Feb 02 12:25:55 crc kubenswrapper[4925]: I0202 12:25:55.774487 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_99d0cf5b-0a90-49c5-8302-4401070f1c3c/glance-httpd/0.log" Feb 02 12:25:55 crc kubenswrapper[4925]: I0202 12:25:55.858584 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_99d0cf5b-0a90-49c5-8302-4401070f1c3c/glance-log/0.log" Feb 02 12:25:56 crc kubenswrapper[4925]: I0202 12:25:56.199597 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7ftcg_749615c6-2bdb-4b47-aced-b8dcb3041df6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:25:56 crc kubenswrapper[4925]: I0202 12:25:56.213691 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c6d58558b-gh6c8_acc24fd1-e3f5-4235-9190-c9aad51e4282/horizon/0.log" Feb 02 12:25:56 crc kubenswrapper[4925]: I0202 12:25:56.459955 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pp5mc_ef8d17fd-9d76-4856-9308-9d7630003827/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:25:56 crc kubenswrapper[4925]: I0202 12:25:56.462103 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c6d58558b-gh6c8_acc24fd1-e3f5-4235-9190-c9aad51e4282/horizon-log/0.log" Feb 02 12:25:56 crc kubenswrapper[4925]: I0202 12:25:56.745520 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500561-khb48_90190a5e-2227-47b8-83e8-4f3f26891a14/keystone-cron/0.log" Feb 02 12:25:56 crc kubenswrapper[4925]: I0202 12:25:56.948582 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7bc4ffdd-b5f2-40e7-9c73-0c5efb6ee28f/kube-state-metrics/0.log" Feb 02 12:25:57 crc kubenswrapper[4925]: I0202 12:25:57.085421 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-b5rct_8e448d5d-ae77-439d-804b-eb4bea2a957d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:25:57 crc kubenswrapper[4925]: I0202 12:25:57.333488 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e/manila-api-log/0.log" Feb 02 12:25:57 crc kubenswrapper[4925]: I0202 12:25:57.461881 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7878d757f7-z5tzg_c10f0dec-2709-40e9-90ce-ad8698d98599/keystone-api/0.log" Feb 02 12:25:57 crc kubenswrapper[4925]: I0202 12:25:57.545242 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_6f7f1ca0-aa5c-48c8-82f1-5d0ef0f1e66e/manila-api/0.log" Feb 02 12:25:57 crc kubenswrapper[4925]: I0202 12:25:57.661556 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4a002372-3866-4b2a-8f00-f5ae284f9e62/probe/0.log" Feb 02 12:25:57 crc kubenswrapper[4925]: I0202 
12:25:57.733972 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4a002372-3866-4b2a-8f00-f5ae284f9e62/manila-scheduler/0.log" Feb 02 12:25:57 crc kubenswrapper[4925]: I0202 12:25:57.872163 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_b6f74fd7-2cf3-4fc7-9535-50503f677c96/manila-share/0.log" Feb 02 12:25:57 crc kubenswrapper[4925]: I0202 12:25:57.896299 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_b6f74fd7-2cf3-4fc7-9535-50503f677c96/probe/0.log" Feb 02 12:25:58 crc kubenswrapper[4925]: I0202 12:25:58.445617 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7755c4bbbc-qkg7f_66b56382-6514-4567-8b82-42454f43f8d1/neutron-httpd/0.log" Feb 02 12:25:58 crc kubenswrapper[4925]: I0202 12:25:58.534151 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7755c4bbbc-qkg7f_66b56382-6514-4567-8b82-42454f43f8d1/neutron-api/0.log" Feb 02 12:25:58 crc kubenswrapper[4925]: I0202 12:25:58.585948 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wjwds_865424ac-9ae9-45a6-9f69-b239f8d3d746/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:25:59 crc kubenswrapper[4925]: I0202 12:25:59.646384 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8ee05daf-c232-40b5-a40f-1c8a6a3a2f7c/nova-cell0-conductor-conductor/0.log" Feb 02 12:25:59 crc kubenswrapper[4925]: I0202 12:25:59.680154 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2526e2c5-e58e-4e8a-b55d-ec5d06a490d1/nova-api-log/0.log" Feb 02 12:26:00 crc kubenswrapper[4925]: I0202 12:26:00.105523 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_be99b255-6467-42af-bb3b-4e6d05fccc64/nova-cell1-conductor-conductor/0.log" Feb 02 12:26:00 crc kubenswrapper[4925]: I0202 12:26:00.124726 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2526e2c5-e58e-4e8a-b55d-ec5d06a490d1/nova-api-api/0.log" Feb 02 12:26:00 crc kubenswrapper[4925]: I0202 12:26:00.427601 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-kkxqd_374c9a22-b870-43ee-a27a-499a0d607e32/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:26:00 crc kubenswrapper[4925]: I0202 12:26:00.468543 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f7ad506b-3504-4825-9ae1-94937ca48d1a/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 12:26:00 crc kubenswrapper[4925]: I0202 12:26:00.821897 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3287d1a5-371d-44d3-a215-6937bf4da1a1/nova-metadata-log/0.log" Feb 02 12:26:01 crc kubenswrapper[4925]: I0202 12:26:01.340818 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64d4545e-f93a-4767-bba7-d01bcaf43c4f/mysql-bootstrap/0.log" Feb 02 12:26:01 crc kubenswrapper[4925]: I0202 12:26:01.436681 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_54598a97-2180-4fe5-a267-970c64919ba0/nova-scheduler-scheduler/0.log" Feb 02 12:26:01 crc kubenswrapper[4925]: I0202 12:26:01.592524 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64d4545e-f93a-4767-bba7-d01bcaf43c4f/mysql-bootstrap/0.log" Feb 02 12:26:01 crc kubenswrapper[4925]: I0202 12:26:01.674283 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_64d4545e-f93a-4767-bba7-d01bcaf43c4f/galera/0.log" Feb 02 12:26:01 crc kubenswrapper[4925]: 
I0202 12:26:01.868649 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d99509bd-1ed8-4516-8ed2-8d99b8e33c67/mysql-bootstrap/0.log" Feb 02 12:26:02 crc kubenswrapper[4925]: I0202 12:26:02.124321 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d99509bd-1ed8-4516-8ed2-8d99b8e33c67/galera/0.log" Feb 02 12:26:02 crc kubenswrapper[4925]: I0202 12:26:02.132130 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d99509bd-1ed8-4516-8ed2-8d99b8e33c67/mysql-bootstrap/0.log" Feb 02 12:26:02 crc kubenswrapper[4925]: I0202 12:26:02.322888 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_515789df-211a-4465-8f1f-5ab3dadcb813/openstackclient/0.log" Feb 02 12:26:02 crc kubenswrapper[4925]: I0202 12:26:02.563745 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gr5rf_feb2b36a-609f-4805-8b50-fe0731522375/ovn-controller/0.log" Feb 02 12:26:02 crc kubenswrapper[4925]: I0202 12:26:02.764893 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-r55bh_952fc6ba-02b5-4a94-90b4-2a206213f818/openstack-network-exporter/0.log" Feb 02 12:26:03 crc kubenswrapper[4925]: I0202 12:26:03.046127 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26w5s_d118fb79-debc-4d5d-b390-38f913681237/ovsdb-server-init/0.log" Feb 02 12:26:03 crc kubenswrapper[4925]: I0202 12:26:03.173323 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3287d1a5-371d-44d3-a215-6937bf4da1a1/nova-metadata-metadata/0.log" Feb 02 12:26:03 crc kubenswrapper[4925]: I0202 12:26:03.232730 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0e7f2d02-9fa4-4e06-a6ae-77c1390e574b/cinder-volume/0.log" Feb 02 12:26:03 crc kubenswrapper[4925]: I0202 12:26:03.498614 4925 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26w5s_d118fb79-debc-4d5d-b390-38f913681237/ovsdb-server-init/0.log" Feb 02 12:26:03 crc kubenswrapper[4925]: I0202 12:26:03.511860 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26w5s_d118fb79-debc-4d5d-b390-38f913681237/ovs-vswitchd/0.log" Feb 02 12:26:03 crc kubenswrapper[4925]: I0202 12:26:03.652238 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26w5s_d118fb79-debc-4d5d-b390-38f913681237/ovsdb-server/0.log" Feb 02 12:26:03 crc kubenswrapper[4925]: I0202 12:26:03.793582 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-sv6hb_9b644239-1d8a-4dd1-96ab-6125f8ccb4e2/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:26:03 crc kubenswrapper[4925]: I0202 12:26:03.814293 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b370a7b-282f-4481-8275-39c981b54f35/openstack-network-exporter/0.log" Feb 02 12:26:03 crc kubenswrapper[4925]: I0202 12:26:03.928591 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b370a7b-282f-4481-8275-39c981b54f35/ovn-northd/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 12:26:04.053392 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0798cb6a-03c0-467e-b65f-05612b9213d3/openstack-network-exporter/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 12:26:04.118553 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0798cb6a-03c0-467e-b65f-05612b9213d3/ovsdbserver-nb/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 12:26:04.271694 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ab8a8eaa-8f11-490e-9251-e4d34b8c481b/ovsdbserver-sb/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 
12:26:04.330368 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ab8a8eaa-8f11-490e-9251-e4d34b8c481b/openstack-network-exporter/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 12:26:04.547513 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-596c688466-nwnv5_97b44970-d770-46b9-9c10-a8ec03d3bbaf/placement-api/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 12:26:04.661309 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-596c688466-nwnv5_97b44970-d770-46b9-9c10-a8ec03d3bbaf/placement-log/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 12:26:04.718838 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f584c201-5eae-46d6-a9c1-b360f5506d24/setup-container/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 12:26:04.843423 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f584c201-5eae-46d6-a9c1-b360f5506d24/setup-container/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 12:26:04.914192 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f584c201-5eae-46d6-a9c1-b360f5506d24/rabbitmq/0.log" Feb 02 12:26:04 crc kubenswrapper[4925]: I0202 12:26:04.983695 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f988cc52-4086-4387-971c-ecd4837c512c/setup-container/0.log" Feb 02 12:26:05 crc kubenswrapper[4925]: I0202 12:26:05.149975 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f988cc52-4086-4387-971c-ecd4837c512c/setup-container/0.log" Feb 02 12:26:05 crc kubenswrapper[4925]: I0202 12:26:05.186548 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f988cc52-4086-4387-971c-ecd4837c512c/rabbitmq/0.log" Feb 02 12:26:05 crc kubenswrapper[4925]: I0202 12:26:05.238610 4925 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-x9zpx_4c793349-e8e5-419c-9e2c-4d3e4dd7500c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:26:05 crc kubenswrapper[4925]: I0202 12:26:05.414425 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gxb9q_92c7fc53-ac73-4641-90de-b290231ea6a9/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:26:05 crc kubenswrapper[4925]: I0202 12:26:05.459100 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rdnbg_dd7acac2-73fe-4a28-853a-8455a8b7ddcc/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:26:05 crc kubenswrapper[4925]: I0202 12:26:05.643034 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-nl7n4_11befe83-a359-400c-b072-1778f7c29f74/ssh-known-hosts-edpm-deployment/0.log" Feb 02 12:26:05 crc kubenswrapper[4925]: I0202 12:26:05.772608 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_7390b503-a9bf-41e3-9506-1f63b8ad6d7d/tempest-tests-tempest-tests-runner/0.log" Feb 02 12:26:05 crc kubenswrapper[4925]: I0202 12:26:05.908045 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fdd0c34b-7157-4648-ae8e-de13e12bcaed/test-operator-logs-container/0.log" Feb 02 12:26:06 crc kubenswrapper[4925]: I0202 12:26:06.006561 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-zb42p_1d4b3b51-6672-4310-92e9-5a5c88c192ba/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 12:26:09 crc kubenswrapper[4925]: I0202 12:26:09.664191 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:26:09 crc 
kubenswrapper[4925]: E0202 12:26:09.664939 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:26:15 crc kubenswrapper[4925]: I0202 12:26:15.539569 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6bf67c7c-0e93-499e-9530-735520afac74/memcached/0.log" Feb 02 12:26:20 crc kubenswrapper[4925]: I0202 12:26:20.664785 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:26:20 crc kubenswrapper[4925]: E0202 12:26:20.665805 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:26:30 crc kubenswrapper[4925]: I0202 12:26:30.884354 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/util/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 12:26:31.086889 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/util/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 12:26:31.097906 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/pull/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 12:26:31.120560 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/pull/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 12:26:31.279890 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/pull/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 12:26:31.287187 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/util/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 12:26:31.345472 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ac39d11ce002c10f3c3d4de51b104fc70fb2ca8ad01159ca7d553d2185nnjhr_d2b04846-e7f8-4fe5-9878-8cc586d96b5a/extract/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 12:26:31.487649 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-5qxgq_271532e8-0b2a-40bc-b982-56e6c0c706dc/manager/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 12:26:31.709380 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-zvg88_6f64f1b5-8b8f-48b6-934c-5d148565b151/manager/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 12:26:31.882875 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-swkbc_e6ccf8c1-dcaf-49c7-84d9-dada6d7fec73/manager/0.log" Feb 02 12:26:31 crc kubenswrapper[4925]: I0202 
12:26:31.959804 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-wggcm_2670eaa9-d6c1-479d-98d1-9a86c0c09305/manager/0.log" Feb 02 12:26:32 crc kubenswrapper[4925]: I0202 12:26:32.125679 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-mfxvn_8405a39c-7526-47b8-93b8-b9bb03cb970b/manager/0.log" Feb 02 12:26:32 crc kubenswrapper[4925]: I0202 12:26:32.419828 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-fgf8c_714728e3-dda9-47d3-aca5-c9bf8a13c2eb/manager/0.log" Feb 02 12:26:32 crc kubenswrapper[4925]: I0202 12:26:32.746064 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-m9rb5_9b6aadaa-89ca-46f2-bf48-59726671b789/manager/0.log" Feb 02 12:26:32 crc kubenswrapper[4925]: I0202 12:26:32.779789 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-kbc5t_a8a71810-ebcf-4908-8e41-73fdce287188/manager/0.log" Feb 02 12:26:33 crc kubenswrapper[4925]: I0202 12:26:33.014012 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-f9rbf_6db50ed1-76a9-48ad-b08e-07edd9d07421/manager/0.log" Feb 02 12:26:33 crc kubenswrapper[4925]: I0202 12:26:33.063755 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-v4m7x_2d3514fc-34cd-4021-a4d9-662abe6bb56e/manager/0.log" Feb 02 12:26:33 crc kubenswrapper[4925]: I0202 12:26:33.310763 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-8nf8m_7b8e50f8-9611-4be4-aa4e-a0834ec27a24/manager/0.log" Feb 02 12:26:33 crc 
kubenswrapper[4925]: I0202 12:26:33.352349 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-56b8d567c6-9sb76_057f6b87-28a7-46c6-8d51-c32937d77a6a/manager/0.log" Feb 02 12:26:33 crc kubenswrapper[4925]: I0202 12:26:33.503052 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-bfkmp_252fe85c-1645-4a4b-bd66-efe5814e9b09/manager/0.log" Feb 02 12:26:33 crc kubenswrapper[4925]: I0202 12:26:33.522098 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-zksqs_85d89138-ff2c-4e69-bd55-bf6b2648d286/manager/0.log" Feb 02 12:26:33 crc kubenswrapper[4925]: I0202 12:26:33.697657 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dd6bs8_a4e64115-b62c-421f-8072-88fc52eef59e/manager/0.log" Feb 02 12:26:33 crc kubenswrapper[4925]: I0202 12:26:33.839598 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7bfc86c845-8crkz_f0498a78-8295-4910-bf25-61219ef0105c/operator/0.log" Feb 02 12:26:34 crc kubenswrapper[4925]: I0202 12:26:34.089565 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-grspq_f66c6d9e-dc09-4ffc-af2b-672b8406c132/registry-server/0.log" Feb 02 12:26:34 crc kubenswrapper[4925]: I0202 12:26:34.328477 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-zrg4p_88bf0458-e0ab-4b1b-ad4d-01e0f51780e8/manager/0.log" Feb 02 12:26:34 crc kubenswrapper[4925]: I0202 12:26:34.408259 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-5rz7t_e11ef3f5-cbad-483b-a5a6-dedfb5ec556f/manager/0.log" Feb 02 
12:26:34 crc kubenswrapper[4925]: I0202 12:26:34.672047 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vw6m6_21d85aaf-29ca-4cc9-8831-bb5691bc29d9/operator/0.log" Feb 02 12:26:34 crc kubenswrapper[4925]: I0202 12:26:34.806876 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-4lhnh_fc69d485-23dc-4c0c-88ef-9fc6729d977d/manager/0.log" Feb 02 12:26:35 crc kubenswrapper[4925]: I0202 12:26:35.023007 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-k579v_ae37dc52-0e8c-41b3-9c07-7ce321c5e2a0/manager/0.log" Feb 02 12:26:35 crc kubenswrapper[4925]: I0202 12:26:35.123599 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-8mpnq_07bdcdf5-a330-4524-9695-d089c2fbd4ae/manager/0.log" Feb 02 12:26:35 crc kubenswrapper[4925]: I0202 12:26:35.183225 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d4f579c97-rrqkc_7112b3b6-a74c-4a93-94a2-8cbdbfd960b0/manager/0.log" Feb 02 12:26:35 crc kubenswrapper[4925]: I0202 12:26:35.233990 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-gbm72_2ce3d469-8592-45c6-aba0-f1a607694c6d/manager/0.log" Feb 02 12:26:35 crc kubenswrapper[4925]: I0202 12:26:35.664715 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:26:35 crc kubenswrapper[4925]: E0202 12:26:35.665021 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:26:48 crc kubenswrapper[4925]: I0202 12:26:48.664822 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:26:48 crc kubenswrapper[4925]: E0202 12:26:48.665611 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:26:53 crc kubenswrapper[4925]: I0202 12:26:53.389085 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ghpq7_693d8818-a349-4e21-80cd-26caca3271b5/control-plane-machine-set-operator/0.log" Feb 02 12:26:53 crc kubenswrapper[4925]: I0202 12:26:53.600438 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cmf26_5da7ca31-35e0-47b3-a877-63d50ed68d70/kube-rbac-proxy/0.log" Feb 02 12:26:53 crc kubenswrapper[4925]: I0202 12:26:53.608410 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cmf26_5da7ca31-35e0-47b3-a877-63d50ed68d70/machine-api-operator/0.log" Feb 02 12:27:02 crc kubenswrapper[4925]: I0202 12:27:02.664678 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:27:02 crc kubenswrapper[4925]: E0202 12:27:02.665478 4925 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fphfd_openshift-machine-config-operator(08797ee8-d3b4-4eed-8482-c19a5b6b87c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.003553 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mn597"] Feb 02 12:27:05 crc kubenswrapper[4925]: E0202 12:27:05.005662 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerName="extract-utilities" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.005687 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerName="extract-utilities" Feb 02 12:27:05 crc kubenswrapper[4925]: E0202 12:27:05.005713 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerName="registry-server" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.005721 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerName="registry-server" Feb 02 12:27:05 crc kubenswrapper[4925]: E0202 12:27:05.005959 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerName="extract-content" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.005975 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerName="extract-content" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.006974 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="3830ab74-f43b-4887-9389-ca6b38d2a09d" containerName="registry-server" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.019657 4925 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.027776 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn597"] Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.181247 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-catalog-content\") pod \"redhat-marketplace-mn597\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.181359 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-utilities\") pod \"redhat-marketplace-mn597\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.181426 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlncf\" (UniqueName: \"kubernetes.io/projected/f39d0e79-a252-4727-b8e7-41a566b49234-kube-api-access-qlncf\") pod \"redhat-marketplace-mn597\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.283536 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-catalog-content\") pod \"redhat-marketplace-mn597\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.283980 
4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-catalog-content\") pod \"redhat-marketplace-mn597\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.283981 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-utilities\") pod \"redhat-marketplace-mn597\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.284117 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlncf\" (UniqueName: \"kubernetes.io/projected/f39d0e79-a252-4727-b8e7-41a566b49234-kube-api-access-qlncf\") pod \"redhat-marketplace-mn597\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.284324 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-utilities\") pod \"redhat-marketplace-mn597\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.304266 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlncf\" (UniqueName: \"kubernetes.io/projected/f39d0e79-a252-4727-b8e7-41a566b49234-kube-api-access-qlncf\") pod \"redhat-marketplace-mn597\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.348087 4925 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.377829 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2kqvb_06d7c0c7-2b68-478e-8113-abae661d30f6/cert-manager-controller/0.log" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.688578 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-2l2bm_8a78553e-d9dd-4f70-a1c2-ae2ebd4d01bd/cert-manager-cainjector/0.log" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.739826 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-9bcc7_b0cdbe98-e1d1-4844-a567-695916cc41f0/cert-manager-webhook/0.log" Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.892367 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn597"] Feb 02 12:27:05 crc kubenswrapper[4925]: I0202 12:27:05.919174 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn597" event={"ID":"f39d0e79-a252-4727-b8e7-41a566b49234","Type":"ContainerStarted","Data":"2fef61b93cbe3d60082c5c516d4958a1ea87cf8c3719111b466660c7d66a8d3c"} Feb 02 12:27:06 crc kubenswrapper[4925]: I0202 12:27:06.930464 4925 generic.go:334] "Generic (PLEG): container finished" podID="f39d0e79-a252-4727-b8e7-41a566b49234" containerID="83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213" exitCode=0 Feb 02 12:27:06 crc kubenswrapper[4925]: I0202 12:27:06.930567 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn597" event={"ID":"f39d0e79-a252-4727-b8e7-41a566b49234","Type":"ContainerDied","Data":"83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213"} Feb 02 12:27:06 crc kubenswrapper[4925]: I0202 12:27:06.932744 4925 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Feb 02 12:27:08 crc kubenswrapper[4925]: I0202 12:27:08.948030 4925 generic.go:334] "Generic (PLEG): container finished" podID="f39d0e79-a252-4727-b8e7-41a566b49234" containerID="e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909" exitCode=0 Feb 02 12:27:08 crc kubenswrapper[4925]: I0202 12:27:08.948127 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn597" event={"ID":"f39d0e79-a252-4727-b8e7-41a566b49234","Type":"ContainerDied","Data":"e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909"} Feb 02 12:27:09 crc kubenswrapper[4925]: I0202 12:27:09.957182 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn597" event={"ID":"f39d0e79-a252-4727-b8e7-41a566b49234","Type":"ContainerStarted","Data":"8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61"} Feb 02 12:27:09 crc kubenswrapper[4925]: I0202 12:27:09.977298 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mn597" podStartSLOduration=3.446026925 podStartE2EDuration="5.977277741s" podCreationTimestamp="2026-02-02 12:27:04 +0000 UTC" firstStartedPulling="2026-02-02 12:27:06.932337758 +0000 UTC m=+5403.936586710" lastFinishedPulling="2026-02-02 12:27:09.463588564 +0000 UTC m=+5406.467837526" observedRunningTime="2026-02-02 12:27:09.975465282 +0000 UTC m=+5406.979714244" watchObservedRunningTime="2026-02-02 12:27:09.977277741 +0000 UTC m=+5406.981526713" Feb 02 12:27:14 crc kubenswrapper[4925]: I0202 12:27:14.673105 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:27:15 crc kubenswrapper[4925]: I0202 12:27:15.348849 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:15 crc kubenswrapper[4925]: 
I0202 12:27:15.349403 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:15 crc kubenswrapper[4925]: I0202 12:27:15.402939 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:16 crc kubenswrapper[4925]: I0202 12:27:16.010175 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"a128802a4a0d299eb9be866db4d78017908c5fe4686a29df2c4863a75a470b9a"} Feb 02 12:27:16 crc kubenswrapper[4925]: I0202 12:27:16.069889 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:16 crc kubenswrapper[4925]: I0202 12:27:16.124305 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn597"] Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.025339 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mn597" podUID="f39d0e79-a252-4727-b8e7-41a566b49234" containerName="registry-server" containerID="cri-o://8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61" gracePeriod=2 Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.421475 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wvpzr_fdf9fdc0-d0bc-48eb-881f-9f053560d16d/nmstate-console-plugin/0.log" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.566302 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.622831 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-26h6v_658d0400-3726-4797-a477-8d95c17ccd3a/nmstate-handler/0.log" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.636847 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-utilities\") pod \"f39d0e79-a252-4727-b8e7-41a566b49234\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.636997 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-catalog-content\") pod \"f39d0e79-a252-4727-b8e7-41a566b49234\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.637113 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlncf\" (UniqueName: \"kubernetes.io/projected/f39d0e79-a252-4727-b8e7-41a566b49234-kube-api-access-qlncf\") pod \"f39d0e79-a252-4727-b8e7-41a566b49234\" (UID: \"f39d0e79-a252-4727-b8e7-41a566b49234\") " Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.638160 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-utilities" (OuterVolumeSpecName: "utilities") pod "f39d0e79-a252-4727-b8e7-41a566b49234" (UID: "f39d0e79-a252-4727-b8e7-41a566b49234"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.647240 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f39d0e79-a252-4727-b8e7-41a566b49234-kube-api-access-qlncf" (OuterVolumeSpecName: "kube-api-access-qlncf") pod "f39d0e79-a252-4727-b8e7-41a566b49234" (UID: "f39d0e79-a252-4727-b8e7-41a566b49234"). InnerVolumeSpecName "kube-api-access-qlncf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.687607 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f39d0e79-a252-4727-b8e7-41a566b49234" (UID: "f39d0e79-a252-4727-b8e7-41a566b49234"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.697343 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hzmqd_3d287bf3-d7ef-4ccf-ad54-c56563a8092c/kube-rbac-proxy/0.log" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.712361 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hzmqd_3d287bf3-d7ef-4ccf-ad54-c56563a8092c/nmstate-metrics/0.log" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.739842 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.739876 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlncf\" (UniqueName: \"kubernetes.io/projected/f39d0e79-a252-4727-b8e7-41a566b49234-kube-api-access-qlncf\") on node \"crc\" DevicePath \"\"" Feb 02 
12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.739887 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f39d0e79-a252-4727-b8e7-41a566b49234-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.871711 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tm82s_fd2e1ecb-2c35-4496-8679-da6345ee07a2/nmstate-operator/0.log" Feb 02 12:27:18 crc kubenswrapper[4925]: I0202 12:27:18.972894 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-j84xr_21c65b61-7ab3-4ef7-b4d2-edef7c4df1bd/nmstate-webhook/0.log" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.033768 4925 generic.go:334] "Generic (PLEG): container finished" podID="f39d0e79-a252-4727-b8e7-41a566b49234" containerID="8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61" exitCode=0 Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.033811 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn597" event={"ID":"f39d0e79-a252-4727-b8e7-41a566b49234","Type":"ContainerDied","Data":"8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61"} Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.033837 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn597" event={"ID":"f39d0e79-a252-4727-b8e7-41a566b49234","Type":"ContainerDied","Data":"2fef61b93cbe3d60082c5c516d4958a1ea87cf8c3719111b466660c7d66a8d3c"} Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.033853 4925 scope.go:117] "RemoveContainer" containerID="8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.033960 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn597" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.068765 4925 scope.go:117] "RemoveContainer" containerID="e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.073289 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn597"] Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.088917 4925 scope.go:117] "RemoveContainer" containerID="83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.093823 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn597"] Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.134210 4925 scope.go:117] "RemoveContainer" containerID="8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61" Feb 02 12:27:19 crc kubenswrapper[4925]: E0202 12:27:19.135728 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61\": container with ID starting with 8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61 not found: ID does not exist" containerID="8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.135767 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61"} err="failed to get container status \"8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61\": rpc error: code = NotFound desc = could not find container \"8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61\": container with ID starting with 8baf75343156e45811e2214b1a9ff47d6938d06a94d2c329100ec71c1ddbfb61 not found: 
ID does not exist" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.135792 4925 scope.go:117] "RemoveContainer" containerID="e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909" Feb 02 12:27:19 crc kubenswrapper[4925]: E0202 12:27:19.136259 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909\": container with ID starting with e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909 not found: ID does not exist" containerID="e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.136287 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909"} err="failed to get container status \"e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909\": rpc error: code = NotFound desc = could not find container \"e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909\": container with ID starting with e5c2d1f2634b64f13d29bc98779e0ae1538e08b6bec03093a1782e6d04f3c909 not found: ID does not exist" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.136306 4925 scope.go:117] "RemoveContainer" containerID="83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213" Feb 02 12:27:19 crc kubenswrapper[4925]: E0202 12:27:19.136510 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213\": container with ID starting with 83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213 not found: ID does not exist" containerID="83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213" Feb 02 12:27:19 crc kubenswrapper[4925]: I0202 12:27:19.136536 4925 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213"} err="failed to get container status \"83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213\": rpc error: code = NotFound desc = could not find container \"83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213\": container with ID starting with 83fa853a4b2a1813f882b11ca40badcfc905ccf7d3840fef6d094e923e26e213 not found: ID does not exist" Feb 02 12:27:20 crc kubenswrapper[4925]: I0202 12:27:20.673458 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f39d0e79-a252-4727-b8e7-41a566b49234" path="/var/lib/kubelet/pods/f39d0e79-a252-4727-b8e7-41a566b49234/volumes" Feb 02 12:27:44 crc kubenswrapper[4925]: I0202 12:27:44.486387 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-t7z6x_09785fed-de18-4a9b-b32f-8a3644ede917/kube-rbac-proxy/0.log" Feb 02 12:27:44 crc kubenswrapper[4925]: I0202 12:27:44.549556 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-t7z6x_09785fed-de18-4a9b-b32f-8a3644ede917/controller/0.log" Feb 02 12:27:44 crc kubenswrapper[4925]: I0202 12:27:44.710124 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-frr-files/0.log" Feb 02 12:27:44 crc kubenswrapper[4925]: I0202 12:27:44.865767 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-reloader/0.log" Feb 02 12:27:44 crc kubenswrapper[4925]: I0202 12:27:44.873222 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-metrics/0.log" Feb 02 12:27:44 crc kubenswrapper[4925]: I0202 12:27:44.900243 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-frr-files/0.log" Feb 02 12:27:44 crc kubenswrapper[4925]: I0202 12:27:44.901552 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-reloader/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.084229 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-frr-files/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.117206 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-metrics/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.133596 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-reloader/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.145938 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-metrics/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.312821 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-frr-files/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.312824 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-metrics/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.338032 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/cp-reloader/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.359532 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/controller/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.518751 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/kube-rbac-proxy/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.541476 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/frr-metrics/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.572656 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/kube-rbac-proxy-frr/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.796351 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/reloader/0.log" Feb 02 12:27:45 crc kubenswrapper[4925]: I0202 12:27:45.814750 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-5fpmg_fa78a5ba-04ae-4ff3-85f1-6c95530e3ff2/frr-k8s-webhook-server/0.log" Feb 02 12:27:46 crc kubenswrapper[4925]: I0202 12:27:46.042502 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7c47d49988-6g6jm_5f1c0635-1bd7-4997-b0bd-5f57e7bd2893/manager/0.log" Feb 02 12:27:46 crc kubenswrapper[4925]: I0202 12:27:46.267338 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dqhvw_263f4c60-783f-4109-bcf6-cbdd5e03ec0e/kube-rbac-proxy/0.log" Feb 02 12:27:46 crc kubenswrapper[4925]: I0202 12:27:46.302120 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5754578b6f-nb2dq_0876b510-fef0-4243-b650-8369e62c4a93/webhook-server/0.log" Feb 02 12:27:46 crc kubenswrapper[4925]: I0202 12:27:46.988428 4925 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dqhvw_263f4c60-783f-4109-bcf6-cbdd5e03ec0e/speaker/0.log" Feb 02 12:27:47 crc kubenswrapper[4925]: I0202 12:27:47.358690 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qx9hq_04f8da8f-7d17-4f0d-9fb2-5a66470d62dd/frr/0.log" Feb 02 12:27:58 crc kubenswrapper[4925]: I0202 12:27:58.987883 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/util/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.126626 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/pull/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.143566 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/util/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.179732 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/pull/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.376152 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/extract/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.392607 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/util/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.451856 4925 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcpmgns_4331a7b0-93b0-40b7-9b53-77b0664942b8/pull/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.565609 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/util/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.702637 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/pull/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.742056 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/pull/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.744987 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/util/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.897618 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/pull/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.918783 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/util/0.log" Feb 02 12:27:59 crc kubenswrapper[4925]: I0202 12:27:59.957636 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713znk8v_6643842e-f888-4afc-ac1c-c2e7ef17360d/extract/0.log" Feb 02 12:28:00 crc kubenswrapper[4925]: I0202 12:28:00.054639 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-utilities/0.log" Feb 02 12:28:00 crc kubenswrapper[4925]: I0202 12:28:00.219039 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-utilities/0.log" Feb 02 12:28:00 crc kubenswrapper[4925]: I0202 12:28:00.250818 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-content/0.log" Feb 02 12:28:00 crc kubenswrapper[4925]: I0202 12:28:00.251911 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-content/0.log" Feb 02 12:28:00 crc kubenswrapper[4925]: I0202 12:28:00.441321 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-content/0.log" Feb 02 12:28:00 crc kubenswrapper[4925]: I0202 12:28:00.466583 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/extract-utilities/0.log" Feb 02 12:28:00 crc kubenswrapper[4925]: I0202 12:28:00.670794 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-utilities/0.log" Feb 02 12:28:00 crc kubenswrapper[4925]: I0202 12:28:00.953180 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-utilities/0.log" Feb 02 12:28:00 crc kubenswrapper[4925]: I0202 12:28:00.990485 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-content/0.log" Feb 02 12:28:01 crc kubenswrapper[4925]: I0202 12:28:01.021871 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-content/0.log" Feb 02 12:28:01 crc kubenswrapper[4925]: I0202 12:28:01.134062 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2f6j4_23cca3fd-3790-4add-a724-50721c42fe9d/registry-server/0.log" Feb 02 12:28:01 crc kubenswrapper[4925]: I0202 12:28:01.201022 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-utilities/0.log" Feb 02 12:28:01 crc kubenswrapper[4925]: I0202 12:28:01.221843 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/extract-content/0.log" Feb 02 12:28:01 crc kubenswrapper[4925]: I0202 12:28:01.421713 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6p5nd_7ed2f286-6b23-4789-9f42-9da9d276812e/marketplace-operator/0.log" Feb 02 12:28:01 crc kubenswrapper[4925]: I0202 12:28:01.653674 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-utilities/0.log" Feb 02 12:28:01 crc kubenswrapper[4925]: I0202 12:28:01.835512 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-utilities/0.log" Feb 02 12:28:01 crc kubenswrapper[4925]: I0202 12:28:01.861659 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-content/0.log" Feb 02 12:28:01 crc kubenswrapper[4925]: I0202 12:28:01.976916 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-content/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.003762 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6r7t_7094aa75-75ce-4d8b-b2be-dd34f846d5fe/registry-server/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.131977 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-content/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.215131 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/extract-utilities/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.334044 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5w4mz_25f59ff6-4459-41ea-ab79-373c701ffcc3/registry-server/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.383131 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-utilities/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.544413 4925 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-content/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.557672 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-content/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.569453 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-utilities/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.775654 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-utilities/0.log" Feb 02 12:28:02 crc kubenswrapper[4925]: I0202 12:28:02.791310 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/extract-content/0.log" Feb 02 12:28:03 crc kubenswrapper[4925]: I0202 12:28:03.375578 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z4qt5_5928e7ce-0012-48b1-9187-d35097e13692/registry-server/0.log" Feb 02 12:29:43 crc kubenswrapper[4925]: I0202 12:29:43.399017 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:29:43 crc kubenswrapper[4925]: I0202 12:29:43.399593 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.159177 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2"] Feb 02 12:30:00 crc kubenswrapper[4925]: E0202 12:30:00.160168 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39d0e79-a252-4727-b8e7-41a566b49234" containerName="extract-content" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.160187 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39d0e79-a252-4727-b8e7-41a566b49234" containerName="extract-content" Feb 02 12:30:00 crc kubenswrapper[4925]: E0202 12:30:00.160222 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39d0e79-a252-4727-b8e7-41a566b49234" containerName="registry-server" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.160229 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39d0e79-a252-4727-b8e7-41a566b49234" containerName="registry-server" Feb 02 12:30:00 crc kubenswrapper[4925]: E0202 12:30:00.160245 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39d0e79-a252-4727-b8e7-41a566b49234" containerName="extract-utilities" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.160251 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39d0e79-a252-4727-b8e7-41a566b49234" containerName="extract-utilities" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.160441 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39d0e79-a252-4727-b8e7-41a566b49234" containerName="registry-server" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.161167 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.164185 4925 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.164401 4925 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.197068 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2"] Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.241329 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-config-volume\") pod \"collect-profiles-29500590-8bvz2\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.241384 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgm75\" (UniqueName: \"kubernetes.io/projected/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-kube-api-access-jgm75\") pod \"collect-profiles-29500590-8bvz2\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.241518 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-secret-volume\") pod \"collect-profiles-29500590-8bvz2\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.342320 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-config-volume\") pod \"collect-profiles-29500590-8bvz2\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.342370 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgm75\" (UniqueName: \"kubernetes.io/projected/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-kube-api-access-jgm75\") pod \"collect-profiles-29500590-8bvz2\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.342482 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-secret-volume\") pod \"collect-profiles-29500590-8bvz2\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.343561 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-config-volume\") pod \"collect-profiles-29500590-8bvz2\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.362020 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-secret-volume\") pod \"collect-profiles-29500590-8bvz2\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.363052 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgm75\" (UniqueName: \"kubernetes.io/projected/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-kube-api-access-jgm75\") pod \"collect-profiles-29500590-8bvz2\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.483447 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:00 crc kubenswrapper[4925]: I0202 12:30:00.961694 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2"] Feb 02 12:30:01 crc kubenswrapper[4925]: I0202 12:30:01.855952 4925 generic.go:334] "Generic (PLEG): container finished" podID="734f6adf-36e9-48e4-9366-a5a0b10a3d9e" containerID="3afbaef99256acaf62324341eaa28ffaa1bfa7af0a59655978465d05604e014f" exitCode=0 Feb 02 12:30:01 crc kubenswrapper[4925]: I0202 12:30:01.856229 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" event={"ID":"734f6adf-36e9-48e4-9366-a5a0b10a3d9e","Type":"ContainerDied","Data":"3afbaef99256acaf62324341eaa28ffaa1bfa7af0a59655978465d05604e014f"} Feb 02 12:30:01 crc kubenswrapper[4925]: I0202 12:30:01.856255 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" 
event={"ID":"734f6adf-36e9-48e4-9366-a5a0b10a3d9e","Type":"ContainerStarted","Data":"81139c7253e4d956897666364ea99c6a6bcaa8ba13e33539a4f087474cf4e7c1"} Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.248314 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.419193 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-secret-volume\") pod \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.419532 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgm75\" (UniqueName: \"kubernetes.io/projected/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-kube-api-access-jgm75\") pod \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.419592 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-config-volume\") pod \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\" (UID: \"734f6adf-36e9-48e4-9366-a5a0b10a3d9e\") " Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.423299 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-config-volume" (OuterVolumeSpecName: "config-volume") pod "734f6adf-36e9-48e4-9366-a5a0b10a3d9e" (UID: "734f6adf-36e9-48e4-9366-a5a0b10a3d9e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.424364 4925 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.429731 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "734f6adf-36e9-48e4-9366-a5a0b10a3d9e" (UID: "734f6adf-36e9-48e4-9366-a5a0b10a3d9e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.444488 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-kube-api-access-jgm75" (OuterVolumeSpecName: "kube-api-access-jgm75") pod "734f6adf-36e9-48e4-9366-a5a0b10a3d9e" (UID: "734f6adf-36e9-48e4-9366-a5a0b10a3d9e"). InnerVolumeSpecName "kube-api-access-jgm75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.526429 4925 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.526477 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgm75\" (UniqueName: \"kubernetes.io/projected/734f6adf-36e9-48e4-9366-a5a0b10a3d9e-kube-api-access-jgm75\") on node \"crc\" DevicePath \"\"" Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.873606 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" event={"ID":"734f6adf-36e9-48e4-9366-a5a0b10a3d9e","Type":"ContainerDied","Data":"81139c7253e4d956897666364ea99c6a6bcaa8ba13e33539a4f087474cf4e7c1"} Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.873653 4925 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81139c7253e4d956897666364ea99c6a6bcaa8ba13e33539a4f087474cf4e7c1" Feb 02 12:30:03 crc kubenswrapper[4925]: I0202 12:30:03.873704 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500590-8bvz2" Feb 02 12:30:04 crc kubenswrapper[4925]: I0202 12:30:04.333857 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq"] Feb 02 12:30:04 crc kubenswrapper[4925]: I0202 12:30:04.342283 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-g6zmq"] Feb 02 12:30:04 crc kubenswrapper[4925]: I0202 12:30:04.691258 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d283885-9df9-497f-ab4c-3faf24639605" path="/var/lib/kubelet/pods/2d283885-9df9-497f-ab4c-3faf24639605/volumes" Feb 02 12:30:04 crc kubenswrapper[4925]: I0202 12:30:04.882416 4925 generic.go:334] "Generic (PLEG): container finished" podID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" containerID="eaa2ae3b0f057c850ef93333acc00b1ea5ff5fa80e3ae414e3deae46d5c1e5bc" exitCode=0 Feb 02 12:30:04 crc kubenswrapper[4925]: I0202 12:30:04.882465 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" event={"ID":"7e7bc800-7224-4c0f-9e73-f93a1ad76039","Type":"ContainerDied","Data":"eaa2ae3b0f057c850ef93333acc00b1ea5ff5fa80e3ae414e3deae46d5c1e5bc"} Feb 02 12:30:04 crc kubenswrapper[4925]: I0202 12:30:04.883235 4925 scope.go:117] "RemoveContainer" containerID="eaa2ae3b0f057c850ef93333acc00b1ea5ff5fa80e3ae414e3deae46d5c1e5bc" Feb 02 12:30:05 crc kubenswrapper[4925]: I0202 12:30:05.820036 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fz5vx_must-gather-v4wb9_7e7bc800-7224-4c0f-9e73-f93a1ad76039/gather/0.log" Feb 02 12:30:13 crc kubenswrapper[4925]: I0202 12:30:13.398303 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:30:13 crc kubenswrapper[4925]: I0202 12:30:13.398734 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:30:17 crc kubenswrapper[4925]: I0202 12:30:17.694885 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fz5vx/must-gather-v4wb9"] Feb 02 12:30:17 crc kubenswrapper[4925]: I0202 12:30:17.695917 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" podUID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" containerName="copy" containerID="cri-o://f7403ca33f2edc36fe3968547d2efead1248054be915b4ebc1f37e825c281fcb" gracePeriod=2 Feb 02 12:30:17 crc kubenswrapper[4925]: I0202 12:30:17.706993 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fz5vx/must-gather-v4wb9"] Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.005322 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fz5vx_must-gather-v4wb9_7e7bc800-7224-4c0f-9e73-f93a1ad76039/copy/0.log" Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.005970 4925 generic.go:334] "Generic (PLEG): container finished" podID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" containerID="f7403ca33f2edc36fe3968547d2efead1248054be915b4ebc1f37e825c281fcb" exitCode=143 Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.132713 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fz5vx_must-gather-v4wb9_7e7bc800-7224-4c0f-9e73-f93a1ad76039/copy/0.log" Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.133155 4925 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.235824 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plfbs\" (UniqueName: \"kubernetes.io/projected/7e7bc800-7224-4c0f-9e73-f93a1ad76039-kube-api-access-plfbs\") pod \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\" (UID: \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\") " Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.235971 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e7bc800-7224-4c0f-9e73-f93a1ad76039-must-gather-output\") pod \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\" (UID: \"7e7bc800-7224-4c0f-9e73-f93a1ad76039\") " Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.242007 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7bc800-7224-4c0f-9e73-f93a1ad76039-kube-api-access-plfbs" (OuterVolumeSpecName: "kube-api-access-plfbs") pod "7e7bc800-7224-4c0f-9e73-f93a1ad76039" (UID: "7e7bc800-7224-4c0f-9e73-f93a1ad76039"). InnerVolumeSpecName "kube-api-access-plfbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.338598 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plfbs\" (UniqueName: \"kubernetes.io/projected/7e7bc800-7224-4c0f-9e73-f93a1ad76039-kube-api-access-plfbs\") on node \"crc\" DevicePath \"\"" Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.406763 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7bc800-7224-4c0f-9e73-f93a1ad76039-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7e7bc800-7224-4c0f-9e73-f93a1ad76039" (UID: "7e7bc800-7224-4c0f-9e73-f93a1ad76039"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.440903 4925 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7e7bc800-7224-4c0f-9e73-f93a1ad76039-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 12:30:18 crc kubenswrapper[4925]: I0202 12:30:18.677845 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" path="/var/lib/kubelet/pods/7e7bc800-7224-4c0f-9e73-f93a1ad76039/volumes" Feb 02 12:30:19 crc kubenswrapper[4925]: I0202 12:30:19.014111 4925 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fz5vx_must-gather-v4wb9_7e7bc800-7224-4c0f-9e73-f93a1ad76039/copy/0.log" Feb 02 12:30:19 crc kubenswrapper[4925]: I0202 12:30:19.014458 4925 scope.go:117] "RemoveContainer" containerID="f7403ca33f2edc36fe3968547d2efead1248054be915b4ebc1f37e825c281fcb" Feb 02 12:30:19 crc kubenswrapper[4925]: I0202 12:30:19.014604 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fz5vx/must-gather-v4wb9" Feb 02 12:30:19 crc kubenswrapper[4925]: I0202 12:30:19.036176 4925 scope.go:117] "RemoveContainer" containerID="eaa2ae3b0f057c850ef93333acc00b1ea5ff5fa80e3ae414e3deae46d5c1e5bc" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.681744 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z6zb4"] Feb 02 12:30:21 crc kubenswrapper[4925]: E0202 12:30:21.682499 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" containerName="gather" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.682515 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" containerName="gather" Feb 02 12:30:21 crc kubenswrapper[4925]: E0202 12:30:21.682540 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" containerName="copy" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.682548 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" containerName="copy" Feb 02 12:30:21 crc kubenswrapper[4925]: E0202 12:30:21.682560 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734f6adf-36e9-48e4-9366-a5a0b10a3d9e" containerName="collect-profiles" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.682569 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="734f6adf-36e9-48e4-9366-a5a0b10a3d9e" containerName="collect-profiles" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.682797 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" containerName="gather" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.682820 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7bc800-7224-4c0f-9e73-f93a1ad76039" containerName="copy" Feb 02 12:30:21 crc 
kubenswrapper[4925]: I0202 12:30:21.682840 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="734f6adf-36e9-48e4-9366-a5a0b10a3d9e" containerName="collect-profiles" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.684550 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.707126 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6zb4"] Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.807120 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-catalog-content\") pod \"community-operators-z6zb4\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.807185 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s975s\" (UniqueName: \"kubernetes.io/projected/3262e0c3-5db2-43f9-8638-73ac262903d3-kube-api-access-s975s\") pod \"community-operators-z6zb4\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.807439 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-utilities\") pod \"community-operators-z6zb4\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.909280 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-utilities\") pod \"community-operators-z6zb4\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.909357 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-catalog-content\") pod \"community-operators-z6zb4\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.909393 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s975s\" (UniqueName: \"kubernetes.io/projected/3262e0c3-5db2-43f9-8638-73ac262903d3-kube-api-access-s975s\") pod \"community-operators-z6zb4\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.909761 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-utilities\") pod \"community-operators-z6zb4\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.910052 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-catalog-content\") pod \"community-operators-z6zb4\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:21 crc kubenswrapper[4925]: I0202 12:30:21.928606 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s975s\" (UniqueName: 
\"kubernetes.io/projected/3262e0c3-5db2-43f9-8638-73ac262903d3-kube-api-access-s975s\") pod \"community-operators-z6zb4\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:22 crc kubenswrapper[4925]: I0202 12:30:22.005536 4925 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:22 crc kubenswrapper[4925]: I0202 12:30:22.336198 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6zb4"] Feb 02 12:30:23 crc kubenswrapper[4925]: I0202 12:30:23.051014 4925 generic.go:334] "Generic (PLEG): container finished" podID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerID="417b2f6b592921f8233adf2cda8115215296a6464e1a6b0a7b5d47b2f416f140" exitCode=0 Feb 02 12:30:23 crc kubenswrapper[4925]: I0202 12:30:23.051095 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zb4" event={"ID":"3262e0c3-5db2-43f9-8638-73ac262903d3","Type":"ContainerDied","Data":"417b2f6b592921f8233adf2cda8115215296a6464e1a6b0a7b5d47b2f416f140"} Feb 02 12:30:23 crc kubenswrapper[4925]: I0202 12:30:23.051389 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zb4" event={"ID":"3262e0c3-5db2-43f9-8638-73ac262903d3","Type":"ContainerStarted","Data":"f5aac6589879b11a4f0ab598ca97934fcd939cc8733256720c487833b630bab5"} Feb 02 12:30:24 crc kubenswrapper[4925]: I0202 12:30:24.082157 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zb4" event={"ID":"3262e0c3-5db2-43f9-8638-73ac262903d3","Type":"ContainerStarted","Data":"da39baf8bb1541817fb4edfa36c058c31db98ed28ddc6d9c30cd6d92f9ee619e"} Feb 02 12:30:25 crc kubenswrapper[4925]: I0202 12:30:25.092781 4925 generic.go:334] "Generic (PLEG): container finished" podID="3262e0c3-5db2-43f9-8638-73ac262903d3" 
containerID="da39baf8bb1541817fb4edfa36c058c31db98ed28ddc6d9c30cd6d92f9ee619e" exitCode=0 Feb 02 12:30:25 crc kubenswrapper[4925]: I0202 12:30:25.092847 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zb4" event={"ID":"3262e0c3-5db2-43f9-8638-73ac262903d3","Type":"ContainerDied","Data":"da39baf8bb1541817fb4edfa36c058c31db98ed28ddc6d9c30cd6d92f9ee619e"} Feb 02 12:30:25 crc kubenswrapper[4925]: I0202 12:30:25.093132 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zb4" event={"ID":"3262e0c3-5db2-43f9-8638-73ac262903d3","Type":"ContainerStarted","Data":"1e3cd034ec75864d385ca53182cf8d092b24f2fb8fb1ac229d23fb026fb032e2"} Feb 02 12:30:25 crc kubenswrapper[4925]: I0202 12:30:25.117465 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z6zb4" podStartSLOduration=2.669769296 podStartE2EDuration="4.117450353s" podCreationTimestamp="2026-02-02 12:30:21 +0000 UTC" firstStartedPulling="2026-02-02 12:30:23.054129256 +0000 UTC m=+5600.058378218" lastFinishedPulling="2026-02-02 12:30:24.501810313 +0000 UTC m=+5601.506059275" observedRunningTime="2026-02-02 12:30:25.112273003 +0000 UTC m=+5602.116521975" watchObservedRunningTime="2026-02-02 12:30:25.117450353 +0000 UTC m=+5602.121699315" Feb 02 12:30:31 crc kubenswrapper[4925]: I0202 12:30:31.693592 4925 scope.go:117] "RemoveContainer" containerID="243a3fc76ee4f11550e58b10089a69390b2f5d00d6a0d90a913c3a65a10fb449" Feb 02 12:30:32 crc kubenswrapper[4925]: I0202 12:30:32.006130 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:32 crc kubenswrapper[4925]: I0202 12:30:32.006558 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:32 crc kubenswrapper[4925]: I0202 12:30:32.059627 4925 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:32 crc kubenswrapper[4925]: I0202 12:30:32.217927 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:32 crc kubenswrapper[4925]: I0202 12:30:32.298933 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6zb4"] Feb 02 12:30:34 crc kubenswrapper[4925]: I0202 12:30:34.178359 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z6zb4" podUID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerName="registry-server" containerID="cri-o://1e3cd034ec75864d385ca53182cf8d092b24f2fb8fb1ac229d23fb026fb032e2" gracePeriod=2 Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.190845 4925 generic.go:334] "Generic (PLEG): container finished" podID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerID="1e3cd034ec75864d385ca53182cf8d092b24f2fb8fb1ac229d23fb026fb032e2" exitCode=0 Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.190919 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zb4" event={"ID":"3262e0c3-5db2-43f9-8638-73ac262903d3","Type":"ContainerDied","Data":"1e3cd034ec75864d385ca53182cf8d092b24f2fb8fb1ac229d23fb026fb032e2"} Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.402634 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.597880 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-catalog-content\") pod \"3262e0c3-5db2-43f9-8638-73ac262903d3\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.598009 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-utilities\") pod \"3262e0c3-5db2-43f9-8638-73ac262903d3\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.598201 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s975s\" (UniqueName: \"kubernetes.io/projected/3262e0c3-5db2-43f9-8638-73ac262903d3-kube-api-access-s975s\") pod \"3262e0c3-5db2-43f9-8638-73ac262903d3\" (UID: \"3262e0c3-5db2-43f9-8638-73ac262903d3\") " Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.600313 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-utilities" (OuterVolumeSpecName: "utilities") pod "3262e0c3-5db2-43f9-8638-73ac262903d3" (UID: "3262e0c3-5db2-43f9-8638-73ac262903d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.612312 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3262e0c3-5db2-43f9-8638-73ac262903d3-kube-api-access-s975s" (OuterVolumeSpecName: "kube-api-access-s975s") pod "3262e0c3-5db2-43f9-8638-73ac262903d3" (UID: "3262e0c3-5db2-43f9-8638-73ac262903d3"). InnerVolumeSpecName "kube-api-access-s975s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.657064 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3262e0c3-5db2-43f9-8638-73ac262903d3" (UID: "3262e0c3-5db2-43f9-8638-73ac262903d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.700468 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.700516 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3262e0c3-5db2-43f9-8638-73ac262903d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:30:35 crc kubenswrapper[4925]: I0202 12:30:35.700527 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s975s\" (UniqueName: \"kubernetes.io/projected/3262e0c3-5db2-43f9-8638-73ac262903d3-kube-api-access-s975s\") on node \"crc\" DevicePath \"\"" Feb 02 12:30:36 crc kubenswrapper[4925]: I0202 12:30:36.200221 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zb4" event={"ID":"3262e0c3-5db2-43f9-8638-73ac262903d3","Type":"ContainerDied","Data":"f5aac6589879b11a4f0ab598ca97934fcd939cc8733256720c487833b630bab5"} Feb 02 12:30:36 crc kubenswrapper[4925]: I0202 12:30:36.200548 4925 scope.go:117] "RemoveContainer" containerID="1e3cd034ec75864d385ca53182cf8d092b24f2fb8fb1ac229d23fb026fb032e2" Feb 02 12:30:36 crc kubenswrapper[4925]: I0202 12:30:36.200751 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6zb4" Feb 02 12:30:36 crc kubenswrapper[4925]: I0202 12:30:36.231616 4925 scope.go:117] "RemoveContainer" containerID="da39baf8bb1541817fb4edfa36c058c31db98ed28ddc6d9c30cd6d92f9ee619e" Feb 02 12:30:36 crc kubenswrapper[4925]: I0202 12:30:36.238439 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6zb4"] Feb 02 12:30:36 crc kubenswrapper[4925]: I0202 12:30:36.247310 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z6zb4"] Feb 02 12:30:36 crc kubenswrapper[4925]: I0202 12:30:36.259995 4925 scope.go:117] "RemoveContainer" containerID="417b2f6b592921f8233adf2cda8115215296a6464e1a6b0a7b5d47b2f416f140" Feb 02 12:30:36 crc kubenswrapper[4925]: I0202 12:30:36.673441 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3262e0c3-5db2-43f9-8638-73ac262903d3" path="/var/lib/kubelet/pods/3262e0c3-5db2-43f9-8638-73ac262903d3/volumes" Feb 02 12:30:43 crc kubenswrapper[4925]: I0202 12:30:43.398241 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:30:43 crc kubenswrapper[4925]: I0202 12:30:43.398759 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:30:43 crc kubenswrapper[4925]: I0202 12:30:43.398801 4925 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" Feb 02 
12:30:43 crc kubenswrapper[4925]: I0202 12:30:43.399623 4925 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a128802a4a0d299eb9be866db4d78017908c5fe4686a29df2c4863a75a470b9a"} pod="openshift-machine-config-operator/machine-config-daemon-fphfd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 12:30:43 crc kubenswrapper[4925]: I0202 12:30:43.399775 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" containerID="cri-o://a128802a4a0d299eb9be866db4d78017908c5fe4686a29df2c4863a75a470b9a" gracePeriod=600 Feb 02 12:30:44 crc kubenswrapper[4925]: I0202 12:30:44.269815 4925 generic.go:334] "Generic (PLEG): container finished" podID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerID="a128802a4a0d299eb9be866db4d78017908c5fe4686a29df2c4863a75a470b9a" exitCode=0 Feb 02 12:30:44 crc kubenswrapper[4925]: I0202 12:30:44.269903 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerDied","Data":"a128802a4a0d299eb9be866db4d78017908c5fe4686a29df2c4863a75a470b9a"} Feb 02 12:30:44 crc kubenswrapper[4925]: I0202 12:30:44.270311 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" event={"ID":"08797ee8-d3b4-4eed-8482-c19a5b6b87c4","Type":"ContainerStarted","Data":"fbca24d331c36ad56ad674fbe239cce42c71c19734b8fe5c2b15dc88b7e70587"} Feb 02 12:30:44 crc kubenswrapper[4925]: I0202 12:30:44.270335 4925 scope.go:117] "RemoveContainer" containerID="50b810b45df1b671297ce55ac4708622f29bebbcc5273069135c8617fd8eaca5" Feb 02 12:31:31 crc kubenswrapper[4925]: I0202 12:31:31.785186 4925 
scope.go:117] "RemoveContainer" containerID="748526a2aab14ed3bb3c84e3ea9688cab32cd8dcf8e5c2f45d2271165de25d2b" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.230585 4925 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fljl9"] Feb 02 12:31:42 crc kubenswrapper[4925]: E0202 12:31:42.232194 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerName="extract-utilities" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.232219 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerName="extract-utilities" Feb 02 12:31:42 crc kubenswrapper[4925]: E0202 12:31:42.232240 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerName="extract-content" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.232247 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerName="extract-content" Feb 02 12:31:42 crc kubenswrapper[4925]: E0202 12:31:42.232262 4925 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerName="registry-server" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.232271 4925 state_mem.go:107] "Deleted CPUSet assignment" podUID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerName="registry-server" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.232522 4925 memory_manager.go:354] "RemoveStaleState removing state" podUID="3262e0c3-5db2-43f9-8638-73ac262903d3" containerName="registry-server" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.234843 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.268803 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fljl9"] Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.295117 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-utilities\") pod \"redhat-operators-fljl9\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.295396 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-catalog-content\") pod \"redhat-operators-fljl9\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.295595 4925 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq8lk\" (UniqueName: \"kubernetes.io/projected/06f0c641-261f-41d1-8289-4003b43bffe1-kube-api-access-gq8lk\") pod \"redhat-operators-fljl9\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.398266 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-catalog-content\") pod \"redhat-operators-fljl9\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.398370 4925 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-gq8lk\" (UniqueName: \"kubernetes.io/projected/06f0c641-261f-41d1-8289-4003b43bffe1-kube-api-access-gq8lk\") pod \"redhat-operators-fljl9\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.398434 4925 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-utilities\") pod \"redhat-operators-fljl9\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.398935 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-utilities\") pod \"redhat-operators-fljl9\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.399224 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-catalog-content\") pod \"redhat-operators-fljl9\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.424060 4925 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq8lk\" (UniqueName: \"kubernetes.io/projected/06f0c641-261f-41d1-8289-4003b43bffe1-kube-api-access-gq8lk\") pod \"redhat-operators-fljl9\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:42 crc kubenswrapper[4925]: I0202 12:31:42.603721 4925 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:43 crc kubenswrapper[4925]: I0202 12:31:43.122315 4925 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fljl9"] Feb 02 12:31:43 crc kubenswrapper[4925]: I0202 12:31:43.801409 4925 generic.go:334] "Generic (PLEG): container finished" podID="06f0c641-261f-41d1-8289-4003b43bffe1" containerID="c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39" exitCode=0 Feb 02 12:31:43 crc kubenswrapper[4925]: I0202 12:31:43.801461 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fljl9" event={"ID":"06f0c641-261f-41d1-8289-4003b43bffe1","Type":"ContainerDied","Data":"c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39"} Feb 02 12:31:43 crc kubenswrapper[4925]: I0202 12:31:43.801753 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fljl9" event={"ID":"06f0c641-261f-41d1-8289-4003b43bffe1","Type":"ContainerStarted","Data":"2db5b63be76d76454adead39a336cc8224bd8a4014e7ac7633a689ccaaad5f6c"} Feb 02 12:31:45 crc kubenswrapper[4925]: I0202 12:31:45.822007 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fljl9" event={"ID":"06f0c641-261f-41d1-8289-4003b43bffe1","Type":"ContainerStarted","Data":"b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c"} Feb 02 12:31:47 crc kubenswrapper[4925]: I0202 12:31:47.847616 4925 generic.go:334] "Generic (PLEG): container finished" podID="06f0c641-261f-41d1-8289-4003b43bffe1" containerID="b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c" exitCode=0 Feb 02 12:31:47 crc kubenswrapper[4925]: I0202 12:31:47.847728 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fljl9" 
event={"ID":"06f0c641-261f-41d1-8289-4003b43bffe1","Type":"ContainerDied","Data":"b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c"} Feb 02 12:31:48 crc kubenswrapper[4925]: I0202 12:31:48.859016 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fljl9" event={"ID":"06f0c641-261f-41d1-8289-4003b43bffe1","Type":"ContainerStarted","Data":"81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c"} Feb 02 12:31:48 crc kubenswrapper[4925]: I0202 12:31:48.879835 4925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fljl9" podStartSLOduration=2.389766538 podStartE2EDuration="6.879799004s" podCreationTimestamp="2026-02-02 12:31:42 +0000 UTC" firstStartedPulling="2026-02-02 12:31:43.803264651 +0000 UTC m=+5680.807513613" lastFinishedPulling="2026-02-02 12:31:48.293297117 +0000 UTC m=+5685.297546079" observedRunningTime="2026-02-02 12:31:48.878449687 +0000 UTC m=+5685.882698669" watchObservedRunningTime="2026-02-02 12:31:48.879799004 +0000 UTC m=+5685.884047966" Feb 02 12:31:52 crc kubenswrapper[4925]: I0202 12:31:52.604500 4925 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:52 crc kubenswrapper[4925]: I0202 12:31:52.604983 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:31:53 crc kubenswrapper[4925]: I0202 12:31:53.651036 4925 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fljl9" podUID="06f0c641-261f-41d1-8289-4003b43bffe1" containerName="registry-server" probeResult="failure" output=< Feb 02 12:31:53 crc kubenswrapper[4925]: timeout: failed to connect service ":50051" within 1s Feb 02 12:31:53 crc kubenswrapper[4925]: > Feb 02 12:32:02 crc kubenswrapper[4925]: I0202 12:32:02.650977 4925 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:32:02 crc kubenswrapper[4925]: I0202 12:32:02.707061 4925 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:32:02 crc kubenswrapper[4925]: I0202 12:32:02.885143 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fljl9"] Feb 02 12:32:03 crc kubenswrapper[4925]: I0202 12:32:03.986609 4925 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fljl9" podUID="06f0c641-261f-41d1-8289-4003b43bffe1" containerName="registry-server" containerID="cri-o://81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c" gracePeriod=2 Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.425703 4925 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.613062 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq8lk\" (UniqueName: \"kubernetes.io/projected/06f0c641-261f-41d1-8289-4003b43bffe1-kube-api-access-gq8lk\") pod \"06f0c641-261f-41d1-8289-4003b43bffe1\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.613133 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-catalog-content\") pod \"06f0c641-261f-41d1-8289-4003b43bffe1\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.613316 4925 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-utilities\") pod 
\"06f0c641-261f-41d1-8289-4003b43bffe1\" (UID: \"06f0c641-261f-41d1-8289-4003b43bffe1\") " Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.613965 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-utilities" (OuterVolumeSpecName: "utilities") pod "06f0c641-261f-41d1-8289-4003b43bffe1" (UID: "06f0c641-261f-41d1-8289-4003b43bffe1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.618581 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f0c641-261f-41d1-8289-4003b43bffe1-kube-api-access-gq8lk" (OuterVolumeSpecName: "kube-api-access-gq8lk") pod "06f0c641-261f-41d1-8289-4003b43bffe1" (UID: "06f0c641-261f-41d1-8289-4003b43bffe1"). InnerVolumeSpecName "kube-api-access-gq8lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.716404 4925 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.716462 4925 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq8lk\" (UniqueName: \"kubernetes.io/projected/06f0c641-261f-41d1-8289-4003b43bffe1-kube-api-access-gq8lk\") on node \"crc\" DevicePath \"\"" Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.734223 4925 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06f0c641-261f-41d1-8289-4003b43bffe1" (UID: "06f0c641-261f-41d1-8289-4003b43bffe1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.817880 4925 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06f0c641-261f-41d1-8289-4003b43bffe1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.997396 4925 generic.go:334] "Generic (PLEG): container finished" podID="06f0c641-261f-41d1-8289-4003b43bffe1" containerID="81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c" exitCode=0 Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.997442 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fljl9" event={"ID":"06f0c641-261f-41d1-8289-4003b43bffe1","Type":"ContainerDied","Data":"81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c"} Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.997472 4925 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fljl9" event={"ID":"06f0c641-261f-41d1-8289-4003b43bffe1","Type":"ContainerDied","Data":"2db5b63be76d76454adead39a336cc8224bd8a4014e7ac7633a689ccaaad5f6c"} Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.997493 4925 scope.go:117] "RemoveContainer" containerID="81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c" Feb 02 12:32:04 crc kubenswrapper[4925]: I0202 12:32:04.997484 4925 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fljl9" Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.021429 4925 scope.go:117] "RemoveContainer" containerID="b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c" Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.033183 4925 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fljl9"] Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.042549 4925 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fljl9"] Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.046041 4925 scope.go:117] "RemoveContainer" containerID="c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39" Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.083197 4925 scope.go:117] "RemoveContainer" containerID="81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c" Feb 02 12:32:05 crc kubenswrapper[4925]: E0202 12:32:05.083753 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c\": container with ID starting with 81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c not found: ID does not exist" containerID="81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c" Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.083795 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c"} err="failed to get container status \"81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c\": rpc error: code = NotFound desc = could not find container \"81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c\": container with ID starting with 81313d2ccb8e0fd0df9410385ae4d2c15ddc42284055c30369fb71dcbceedd3c not found: ID does 
not exist" Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.083821 4925 scope.go:117] "RemoveContainer" containerID="b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c" Feb 02 12:32:05 crc kubenswrapper[4925]: E0202 12:32:05.084339 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c\": container with ID starting with b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c not found: ID does not exist" containerID="b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c" Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.084468 4925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c"} err="failed to get container status \"b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c\": rpc error: code = NotFound desc = could not find container \"b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c\": container with ID starting with b48ab06ff59442d3d2af7a5d6aad4eff3ed6929d4c4105fb539604718900999c not found: ID does not exist" Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.084569 4925 scope.go:117] "RemoveContainer" containerID="c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39" Feb 02 12:32:05 crc kubenswrapper[4925]: E0202 12:32:05.084992 4925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39\": container with ID starting with c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39 not found: ID does not exist" containerID="c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39" Feb 02 12:32:05 crc kubenswrapper[4925]: I0202 12:32:05.085048 4925 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39"} err="failed to get container status \"c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39\": rpc error: code = NotFound desc = could not find container \"c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39\": container with ID starting with c4451bb4d6834de58d3145a15f86536c5babb3b2889da40b13ec81b214037e39 not found: ID does not exist" Feb 02 12:32:06 crc kubenswrapper[4925]: I0202 12:32:06.677598 4925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f0c641-261f-41d1-8289-4003b43bffe1" path="/var/lib/kubelet/pods/06f0c641-261f-41d1-8289-4003b43bffe1/volumes" Feb 02 12:32:43 crc kubenswrapper[4925]: I0202 12:32:43.398537 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:32:43 crc kubenswrapper[4925]: I0202 12:32:43.399198 4925 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:33:13 crc kubenswrapper[4925]: I0202 12:33:13.398725 4925 patch_prober.go:28] interesting pod/machine-config-daemon-fphfd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:33:13 crc kubenswrapper[4925]: I0202 12:33:13.399371 4925 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fphfd" podUID="08797ee8-d3b4-4eed-8482-c19a5b6b87c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"